New research suggests there are limitations to what deep neural networks can do

#1
C C
https://spectrum.ieee.org/deep-neural-network

INTRO: Deep neural networks are increasingly helping to design microchips, predict how proteins fold, and outperform people at complex games. However, researchers have now discovered there are fundamental theoretical limits to how stable and accurate these AI systems can actually get.

These findings might help shed light on what is and is not actually possible with AI, the scientists add.

In artificial neural networks, components dubbed “neurons” are fed data and cooperate to solve a problem, such as recognizing images. The neural net repeatedly adjusts the links between its neurons and sees if the resulting patterns of behavior are better at finding a solution. Over time, the network discovers which patterns are best at computing results. It then adopts these as defaults, mimicking the process of learning in the human brain. A neural network is dubbed "deep" if it possesses multiple layers of neurons.
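The learning loop described above can be sketched in a few lines of NumPy. This is a hedged, illustrative toy, not anything from the paper: a tiny two-layer ("deep") network repeatedly nudges the links between its neurons (the weight matrices `W1` and `W2`) so that its outputs better match known answers. The task (XOR), layer sizes, and learning rate are all made-up choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, a problem a single layer of neurons cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights play the role of the "links between neurons".
W1 = rng.normal(0, 1, (2, 8))
W2 = rng.normal(0, 1, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)        # hidden-layer activations
    return h, sigmoid(h @ W2)  # network output

losses = []
lr = 2.0
for step in range(5000):
    h, out = forward(X)
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation: nudge each link in the direction that reduces error,
    # the "repeatedly adjusts ... and sees if the result is better" step.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(losses[0], losses[-1])  # the error shrinks as the network "learns"
```

Over many passes the patterns of weights that compute good answers are retained, which is the sense in which the network "adopts these as defaults."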

Although deep neural networks are being used for increasingly practical applications such as analyzing medical scans and empowering autonomous vehicles, there is now overwhelming evidence that they can often prove unstable—that is, a slight alteration in the data they receive can lead to a wild change in outcomes. For example, previous research found that changing a single pixel in an image can make an AI think a horse is a frog, and medical images can be modified in ways imperceptible to the human eye that cause an AI to misdiagnose cancer 100 percent of the time.
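The instability above is easy to demonstrate with a deliberately simple stand-in. The sketch below uses a plain linear classifier rather than a real image model, and the weights, input, and class labels ("horse"/"frog") are invented for illustration: when an input sits close to the decision boundary, a perturbation far too small to notice flips the predicted class.

```python
import numpy as np

w = np.array([1.0, -1.0])        # made-up classifier weights
x = np.array([0.500001, 0.5])    # input just barely on the "horse" side

def predict(x):
    # Sign of the score decides the class.
    return "horse" if w @ x > 0 else "frog"

eps = np.array([-0.00001, 0.0])  # tiny, imperceptible perturbation

print(predict(x), predict(x + eps))  # → horse frog
```

Real adversarial attacks search for such a perturbation in a high-dimensional image space, but the underlying failure mode is the same: a near-boundary input plus a vanishingly small nudge yields a wildly different answer.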

Previous research offered mathematical proof that stable, accurate neural networks exist for a wide variety of problems. However, in a new study, researchers find that although such stable, accurate neural networks may exist in theory for many problems, there may paradoxically be no algorithm that can actually compute them... (MORE - details)

PAPER: https://www.pnas.org/doi/10.1073/pnas.2107151119

