Scivillage.com Casual Discussion Science Forum

Full Version: Do (artificial) neural networks dream visual illusions?
https://www.eurekalert.org/pub_releases/...112020.php

RELEASE: A convolutional neural network is a type of artificial neural network in which the neurons are organized into receptive fields, much like the neurons in the visual cortex of a biological brain. Today, convolutional neural networks (CNNs) are found in a variety of autonomous systems (for example, face detection and recognition, autonomous vehicles, etc.). Networks of this type are highly effective at many artificial vision tasks, such as image segmentation and classification, among many other applications.

Convolutional networks were inspired by the behaviour of the human visual system, in particular its basic structure of concatenated modules, each comprising a linear operation followed by a non-linear operation. A study published in the advance online edition of the journal Vision Research examines the phenomenon of visual illusions in convolutional networks and compares it with their effect on human vision. The study was carried out by Alexander Gómez Villa, Adrian Martín, Javier Vázquez-Corral and Marcelo Bertalmío, members of the Department of Information and Communication Technologies (DTIC), with the participation of researcher Jesús Malo of the University of Valencia.
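
To make that cascade concrete, here is a minimal sketch of such a network in PyTorch (an illustration only, not the architecture used in the study): each stage is a linear operation (a convolution over small receptive fields) followed by a non-linear operation (ReLU).

import torch.nn as nn

# Minimal convolutional network: a concatenation of modules, each a
# linear operation (convolution over small receptive fields) followed
# by a non-linear operation (ReLU), mirroring the structure described
# above. Input and output are 3-channel (RGB) images.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # linear: 3x3 receptive fields
    nn.ReLU(),                                    # non-linear
    nn.Conv2d(16, 16, kernel_size=3, padding=1),  # linear
    nn.ReLU(),                                    # non-linear
    nn.Conv2d(16, 3, kernel_size=3, padding=1),   # map back to an RGB image
)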

"Because of this connection of CNNs with our visual system, in this paper we wanted to see if convolutional networks suffer from similar problems to our visual system. Hence, we focused on visual illusions. Visual illusions are images that our brain perceives differently from how they actually are", explains Gómez Vila, first author of the study.

In their study, the authors trained CNNs on simple tasks that human vision also performs, such as denoising and deblurring. They observed that CNNs trained under these experimental conditions are likewise "deceived" by brightness and colour visual illusions, in the same way that those illusions deceive humans.
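
Reusing the small network sketched above, a denoising set-up along those lines might look like the following (a sketch under assumed settings: Gaussian noise, MSE loss, and random tensors standing in for natural-image patches; the study's actual data and training details may differ).

import torch

# Placeholder standing in for a loader of natural-image patches; in the
# study, training on natural images is the key ingredient.
clean_batches = [torch.rand(8, 3, 64, 64) for _ in range(100)]

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

for clean in clean_batches:
    noisy = clean + 0.1 * torch.randn_like(clean)   # corrupt the input
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)             # reconstruct the original
    loss.backward()
    optimizer.step()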

Furthermore, as Gómez Villa explains, "for our work we also analyse when such illusions cause responses in the network that do not match what is physically expected, but do not match human perception either", that is to say, cases in which the CNN experiences a different optical illusion from the one that humans would perceive.
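
One way to picture that analysis (a hypothetical probe, not the paper's exact protocol): feed the contrast stimulus built earlier through the trained denoiser and compare its output in the two square regions. The input values there are identical, so any difference is the network's own "illusion"; a difference in the direction humans misperceive matches human vision, while a difference in the opposite direction matches neither physics nor perception.

# Convert the grey stimulus to a 3-channel tensor with a batch dimension.
stim = torch.from_numpy(img).repeat(3, 1, 1).unsqueeze(0)  # (1, 3, H, W)

with torch.no_grad():
    out = model(stim)

left = out[0, :, 75:125, 75:125].mean().item()     # square on the dark side
right = out[0, :, 75:125, 275:325].mean().item()   # square on the light side
print(left, right)  # identical inputs; unequal outputs = illusion-like response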

The results of this study are consistent with the long-standing hypothesis that low-level visual illusions are a by-product of the visual system's optimization to natural environments (the scenes a human sees in everyday life). At the same time, these results highlight the limitations of, and the differences between, the human visual system and CNN artificial neural networks.
(Nov 20, 2020 08:05 PM)C C Wrote: https://www.eurekalert.org/pub_releases/...112020.php

Intriguing. Do you believe that CNNs can develop perception and cognition autonomously, in a way comparable to how humans develop them?
(Dec 16, 2020 04:51 PM)Ostronomos Wrote: Intriguing. Do you believe that CNNs can develop perception and cognition autonomously, in a way comparable to how humans develop them?


They can be similarly "trained", like humans, though how machine-learning algorithms accomplish what they do is at times an enigma, because they depend upon statistical patterns (below). The latter remain a kind of implicit information or black box until some analyst succeeds in decoding or translating them into formal understanding or explicit knowledge.

AI succeeds without need of understanding, theory, causation, views about being, etc
https://www.scivillage.com/thread-8487.html

Of course, the same might be said of how the brain does what it does in terms of the processes themselves, except that people can report their own phenomenal-level explanations and personal reasons for the how and why of their actions and beliefs. The scientific elaborations might be different, but the former accounts still sufficed for thousands of years. Perception, however, remained almost "magical" even in that private context; schools of thought and research in recent centuries did break ground there.