https://spectrum.ieee.org/deep-neural-network
INTRO: Deep neural networks are increasingly helping to design microchips, predict how proteins fold, and outperform people at complex games. However, researchers have now discovered there are fundamental theoretical limits to how stable and accurate these AI systems can actually get.
These findings might help shed light on what is and is not actually possible with AI, the scientists add.
In artificial neural networks, components dubbed “neurons” are fed data and cooperate to solve a problem, such as recognizing images. The neural net repeatedly adjusts the links between its neurons and sees whether the resulting patterns of behavior are better at finding a solution. Over time, the network discovers which patterns are best at computing results and adopts these as defaults, mimicking the process of learning in the human brain. A neural network is called “deep” if it possesses multiple layers of neurons.
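To make that learning loop concrete, here is a minimal sketch of a small deep network trained by gradient descent. The two-hidden-layer architecture, the XOR task, and the hyperparameters are illustrative assumptions, not anything taken from the article or the paper:

```python
import numpy as np

# Illustrative sketch only: a tiny two-hidden-layer network learning XOR.
# Architecture, task, and hyperparameters are assumptions for illustration.

rng = np.random.default_rng(0)

# Toy data: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two hidden layers make this network "deep" in the article's sense.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 8)); b2 = np.zeros(8)
W3 = rng.normal(size=(8, 1)); b3 = np.zeros(1)

lr = 0.5
for step in range(5000):
    # Forward pass: compute the network's current outputs.
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = sigmoid(h2 @ W3 + b3)

    # Backward pass: measure how each link contributed to the error,
    # then adjust the links ("weights") to reduce it.
    d_out = (out - y) * out * (1 - out)
    d_h2 = (d_out @ W3.T) * h2 * (1 - h2)
    d_h1 = (d_h2 @ W2.T) * h1 * (1 - h1)

    W3 -= lr * h2.T @ d_out; b3 -= lr * d_out.sum(axis=0)
    W2 -= lr * h1.T @ d_h2;  b2 -= lr * d_h2.sum(axis=0)
    W1 -= lr * X.T @ d_h1;   b1 -= lr * d_h1.sum(axis=0)

print(out.round(3))  # typically close to [[0], [1], [1], [0]] after training
```

The point of the sketch is only the loop structure the article describes: compute outputs, measure the error, adjust the links, repeat.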
Although deep neural networks are being used in increasingly practical applications such as analyzing medical scans and powering autonomous vehicles, there is now overwhelming evidence that they can often prove unstable: a slight alteration in the data they receive can lead to a wild change in outcomes. For example, previous research found that changing a single pixel in an image can make an AI mistake a horse for a frog, and that medical images can be modified in ways imperceptible to the human eye that cause an AI to misdiagnose cancer 100 percent of the time.
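For a feel of how such instability arises, here is a toy sketch of the adversarial-example effect on a stand-in linear classifier. The weights, input, and image size are made-up assumptions; the studies cited above used real images and real deep networks:

```python
import numpy as np

# Illustrative sketch of the instability described above: a tiny, targeted
# change to every pixel flips a classifier's decision. The random linear
# "classifier" and "image" are stand-in assumptions for illustration.

rng = np.random.default_rng(1)

n_pixels = 64 * 64                 # pretend 64x64 grayscale image
w = rng.normal(size=n_pixels)      # pretend trained weights
x = rng.normal(size=n_pixels)      # pretend input image

logit = w @ x
print("original decision:", "class A" if logit > 0 else "class B")

# Move each pixel a tiny step in the direction that most opposes the
# current decision (the sign of the score's gradient w.r.t. the input),
# scaled just enough to cross the decision boundary.
eps = 1.1 * abs(logit) / np.abs(w).sum()
x_adv = x - np.sign(logit) * eps * np.sign(w)

print(f"per-pixel change: {eps:.4f} (pixel values have std ~1)")
print("perturbed decision:", "class A" if w @ x_adv > 0 else "class B")
```

Because every pixel moves only a tiny amount in a carefully chosen direction, the change is invisible at a glance, yet the decision flips; real deep networks on real images exhibit the same behavior.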
Previous research suggested there is mathematical proof that stable, accurate neural networks exist for a wide variety of problems. However, in a new study, researchers find that although stable, accurate neural networks may theoretically exist for many problems, there may paradoxically be no algorithm that can actually compute them... (MORE - details)
PAPER: https://www.pnas.org/doi/10.1073/pnas.2107151119