
Does CNN Really Like Textures?

Image Recognition

3 main points
✔️ New findings on the origins of texture bias in CNNs
✔️ CNNs do, by default, exhibit a texture bias
✔️ However, this does not mean that CNNs lack shape information

The Origins and Prevalence of Texture Bias in Convolutional Neural Networks
written by Katherine L. Hermann, Ting Chen, Simon Kornblith
(Submitted on 20 Nov 2019 (v1), last revised 29 Jun 2020 (this version, v2))

Comments: Published by arXiv
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Neurons and Cognition (q-bio.NC)


Convolutional Neural Networks (CNNs) have delivered state-of-the-art performance in fields such as image classification and object detection, in some benchmarks even surpassing humans. Interestingly, although CNNs were originally inspired by the human visual system, they differ from human vision in several ways. A typical example arises in classification: humans rely primarily on shape information, whereas CNNs rely primarily on texture information. The image below combines the shape of a cat with the texture of an elephant. Humans, preferring shape, judge it to be a cat, while a CNN, preferring texture, judges it to be an elephant.
A preference for texture over shape is called texture bias, and a preference for shape over texture is called shape bias.
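As an illustration, this shape-versus-texture preference can be quantified on cue-conflict images like the cat/elephant example above. The following minimal sketch (hypothetical data and function name, not taken from the paper) counts how often a model's prediction sides with the shape class rather than the texture class:

```python
# Hypothetical sketch of a shape-bias score on cue-conflict images.
# Each image pairs a "shape class" (e.g., cat) with a conflicting
# "texture class" (e.g., elephant); the data below is illustrative.

def shape_bias(predictions):
    """Fraction of shape-consistent decisions among trials where the
    model picked either the shape class or the texture class."""
    shape_hits = sum(1 for pred, shape, _ in predictions if pred == shape)
    texture_hits = sum(1 for pred, _, texture in predictions if pred == texture)
    decided = shape_hits + texture_hits
    return shape_hits / decided if decided else 0.0

# (model prediction, shape class, texture class) for each image
preds = [
    ("elephant", "cat", "elephant"),  # texture-consistent decision
    ("cat", "cat", "elephant"),       # shape-consistent decision
    ("elephant", "cat", "elephant"),  # texture-consistent decision
    ("dog", "cat", "elephant"),       # neither cue; excluded from the score
]
print(shape_bias(preds))  # 1 shape hit out of 3 decided trials
```

A strongly texture-biased model would score near 0 on such a set, while a shape-biased (human-like) classifier would score near 1.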

This texture bias is also said to contribute to the adversarial-examples problem: one reason CNNs are vulnerable to small perturbations is that they rely on texture information. The preference for texture can also be viewed as an inductive bias (an assumption a machine learning method adopts in order to generalize) that is out of sync with real-world conditions, since CNNs prefer texture even on tasks where shape information is important.


Written by 加藤 (Kato)
