Do (artificial) neural networks dream visual illusions?

#1
C C Offline
https://www.eurekalert.org/pub_releases/...112020.php

RELEASE: A convolutional neural network is a type of artificial neural network in which the neurons are organized into receptive fields in a very similar way to neurons in the visual cortex of a biological brain. Today, convolutional neural networks (CNNs) are found in a variety of autonomous systems (for example, face detection and recognition, autonomous vehicles, etc.). This type of network is highly effective in many artificial vision tasks, such as in image segmentation and classification, along with many other applications.

Convolutional networks were inspired by the behaviour of the human visual system, particularly its basic structure of concatenated modules, each comprising a linear operation followed by a non-linear operation. A study published in the advance online edition of the journal Vision Research examines the phenomenon of visual illusions in convolutional networks compared with their effect on human vision. The study was carried out by Alexander Gómez Villa, Adrian Martín, Javier Vázquez-Corral and Marcelo Bertalmío, members of the Department of Information and Communication Technologies (DTIC), with the participation of the researcher Jesús Malo of the University of Valencia.
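The "linear operation followed by a non-linear operation" structure described above can be sketched in a few lines. This is an illustrative toy (not the authors' model): a 1-D convolution (the linear stage, written as cross-correlation as in CNN libraries) followed by a ReLU non-linearity, in pure Python.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (the linear stage; no kernel flip,
    i.e. cross-correlation, as CNN libraries compute it)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(values):
    """Pointwise non-linearity applied after the linear stage."""
    return [max(0.0, v) for v in values]

def cnn_module(signal, kernel):
    """One linear + non-linear module; a CNN concatenates many of these."""
    return relu(conv1d(signal, kernel))

# A crude edge detector: responds only where the signal steps down.
edges = cnn_module([0, 0, 1, 1, 1, 0, 0], [1, -1])
```

Stacking `cnn_module` calls with different kernels gives the deep concatenation of compound modules that the release describes.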

"Because of this connection of CNNs with our visual system, in this paper we wanted to see if convolutional networks suffer from similar problems to our visual system. Hence, we focused on visual illusions. Visual illusions are images that our brain perceives differently from how they actually are", explains Gómez Vila, first author of the study.

In their study, the authors trained CNNs for simple tasks also performed by human vision, such as denoising and deblurring. They observed that CNNs trained under these experimental conditions are also "deceived" by brightness and colour visual illusions in the same way that such illusions deceive humans.
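A minimal sketch of that training setup, under stated assumptions (the paper trains full CNNs on images; here a single 3-tap convolution filter is fit to denoise smooth 1-D signals by gradient descent on mean squared error):

```python
import math
import random

random.seed(0)

def smooth_signal(n=16):
    """A random smooth signal standing in for 'natural' image content."""
    phase = random.random() * 2 * math.pi
    return [0.5 + 0.5 * math.sin(0.5 * i + phase) for i in range(n)]

def conv3(signal, w):
    """Same-size 3-tap convolution with edge-replicated padding."""
    p = [signal[0]] + list(signal) + [signal[-1]]
    return [sum(p[i + j] * w[j] for j in range(3))
            for i in range(len(signal))]

def train_denoiser(steps=2000, lr=0.05, noise=0.1):
    w = [0.0, 0.0, 0.0]                      # filter weights to learn
    for _ in range(steps):
        clean = smooth_signal()
        noisy = [c + random.gauss(0.0, noise) for c in clean]
        out = conv3(noisy, w)
        p = [noisy[0]] + noisy + [noisy[-1]]
        grad = [0.0, 0.0, 0.0]
        for i in range(len(clean)):
            err = out[i] - clean[i]          # d(MSE)/d(out[i]) is 2*err/n
            for j in range(3):
                grad[j] += 2 * err * p[i + j] / len(clean)
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w

weights = train_denoiser()
```

Nothing in the objective mentions smoothing, yet a local-averaging filter emerges from the denoising task alone; the paper's point is that illusion-like behaviour can likewise emerge as a side effect of such training.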

Furthermore, as Gómez Villa explains, "in our work we also analyse when such illusions cause responses in the network that are neither what is physically expected nor what matches human perception"; that is to say, cases in which CNNs are deceived by a different illusion than the one humans would perceive.

The results of this study are consistent with the long-standing hypothesis that low-level visual illusions are a by-product of optimization to natural environments (the scenes a human sees every day). At the same time, these results highlight the limitations of, and differences between, the human visual system and CNN artificial neural networks.
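The "by-product of optimization" idea can be illustrated with a classic toy example (an assumption for illustration, not the study's model): a difference-of-surround "center-surround" filter, the kind of linear operation that emerges from optimization to natural scenes, already responds differently to two physically identical gray patches on dark versus light backgrounds, which is the simultaneous-brightness-contrast illusion.

```python
def center_surround(signal, i, radius=2):
    """Center value minus the local surround average at position i."""
    surround = [signal[j] for j in range(i - radius, i + radius + 1)
                if j != i and 0 <= j < len(signal)]
    return signal[i] - sum(surround) / len(surround)

dark_bg  = [0.1, 0.1, 0.5, 0.1, 0.1]   # gray patch (0.5) on dark background
light_bg = [0.9, 0.9, 0.5, 0.9, 0.9]   # identical patch on light background

response_dark = center_surround(dark_bg, 2)    # positive: patch "looks" bright
response_light = center_surround(light_bg, 2)  # negative: patch "looks" dark
```

The patch value is the same in both inputs, yet the filter's responses differ in sign, mirroring how humans perceive the patch as brighter on the dark background.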
#2
Ostronomos Offline
(Nov 20, 2020 08:05 PM)C C Wrote: https://www.eurekalert.org/pub_releases/...112020.php


Intriguing. Do you believe that CNNs can develop perception and cognition autonomously in comparison to the way humans develop them?
#3
C C Offline
(Dec 16, 2020 04:51 PM)Ostronomos Wrote: Intriguing. Do you believe that CNNs can develop perception and cognition autonomously in comparison to the way humans develop them?


They can be "trained" much like humans, though how machine learning algorithms accomplish what they do is at times an enigma, because they depend on statistical patterns (below). The latter remain a kind of implicit information, a black box, until some analyst succeeds in decoding or transcribing them into formal understanding or explicit knowledge.

AI succeeds without need of understanding, theory, causation, views about being, etc
https://www.scivillage.com/thread-8487.html

Of course, the same might be said of how the brain does what it does in terms of the processes themselves, except that people can report phenomenal-level explanations and personal reasons for the how and why of their actions and beliefs. The scientific elaborations might be different, but the former accounts still sufficed for thousands of years. Perception, however, remained almost "magical" even in that private context; schools of thought and research of recent centuries did break ground there.

