New Theory Cracks Open the Black Box of Deep Learning

#1
C C
https://www.quantamagazine.org/new-theor...-20170921/

EXCERPT: [...] Like a brain, a deep neural network has layers of neurons — artificial ones that are figments of computer memory. When a neuron fires, it sends signals to connected neurons in the layer above. During deep learning, connections in the network are strengthened or weakened as needed to make the system better at sending signals from input data — the pixels of a photo of a dog, for instance — up through the layers to neurons associated with the right high-level concepts, such as “dog.” After a deep neural network has “learned” from thousands of sample dog photos, it can identify dogs in new photos as accurately as people can. The magic leap from special cases to general concepts during learning gives deep neural networks their power, just as it underlies human reasoning, creativity and the other faculties collectively termed “intelligence.” Experts wonder what it is about deep learning that enables generalization — and to what extent brains apprehend reality in the same way.
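For readers who want to see the mechanism the excerpt describes in concrete form, below is a minimal sketch (not from the article) of a tiny feedforward network whose connection weights are strengthened or weakened by gradient descent until inputs map to the right label. The layer sizes, toy data, and learning rate are illustrative assumptions, not anything Tishby's group used.

```python
# Minimal sketch: connections ("weights") adjusted so signals from the input
# propagate up through a hidden layer to the correct high-level label.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D points, label 1 if the point lies inside the unit circle.
X = rng.normal(size=(200, 2))
y = (np.sum(X**2, axis=1) < 1.0).astype(float).reshape(-1, 1)

# One hidden layer of 8 neurons; these matrices are the "connections" learning adjusts.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: signals flow from the input layer up through the hidden layer.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: the cross-entropy gradient says how much to strengthen
    # or weaken each connection.
    grad_out = (p - y) / len(X)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - h**2)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Gradient-descent update (learning rate 0.5 chosen for this toy example).
    for param, grad in ((W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)):
        param -= 0.5 * grad

print("training accuracy after learning:", np.mean((p > 0.5) == y))
```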

[...] Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied. Tishby’s findings have the AI community buzzing....
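For reference, the 1999 paper by Tishby, Pereira and Bialek states the "squeezing" idea as an optimization problem: find a compressed representation T of the input X that discards as much of X as possible while keeping the information relevant to the target variable Y. A minimal statement of that objective, in the standard notation of the paper rather than anything quoted above:

```latex
% Information bottleneck objective (Tishby, Pereira & Bialek, 1999):
% choose the stochastic encoding p(t|x) of input X into representation T
% that minimizes mutual information with the input (compression),
% while the Lagrange multiplier beta rewards information retained about the label Y.
\min_{p(t \mid x)} \; \mathcal{L} = I(X;T) - \beta \, I(T;Y)
```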

MORE: https://www.quantamagazine.org/new-theor...-20170921/
