[Research] Unlocking the ‘black box’: scientists reveal AI’s hidden thoughts

RELATED: AI's mysterious ‘black box’ problem, explained
- - - - - - - - - - - - -

Unlocking the ‘black box’: scientists reveal AI’s hidden thoughts
https://www.eurekalert.org/news-releases/1068008

INTRO: Deep neural networks are a type of artificial intelligence (AI) that imitate how human brains process information, but understanding how these networks “think” has long been a challenge. Now, researchers at Kyushu University have developed a new method to understand how deep neural networks interpret information and sort it into groups. Published in IEEE Transactions on Neural Networks and Learning Systems, the study addresses the pressing need to ensure that AI systems are accurate, robust, and able to meet the standards required for safe use.

Deep neural networks process information in many layers, similarly to humans solving a puzzle step by step. The first layer, known as the input layer, brings in the raw data. The subsequent layers, called hidden layers, analyze the information. Early hidden layers focus on basic features, such as detecting edges or textures—like examining individual puzzle pieces. Deeper hidden layers combine these features to recognize more complex patterns, such as identifying a cat or a dog—similar to connecting puzzle pieces to reveal the bigger picture.
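To make this layering concrete, here is a minimal, self-contained PyTorch sketch of such a network. It is an illustrative toy, not the architecture used in the Kyushu University study: early convolutional layers pick out edge/texture-level features, deeper layers pool them into larger patterns, and a final layer maps the result to class scores.

```python
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    """Toy illustration of the layered processing described above
    (an assumption for exposition, not the paper's architecture)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Early hidden layers: basic features such as edges and textures.
        self.early = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Deeper hidden layers: combine basic features into complex patterns.
        self.deep = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Output layer: final hidden representation -> class scores.
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.early(x)            # edge/texture-level features
        x = self.deep(x)             # object-level patterns (e.g., cat vs. dog)
        return self.head(x.flatten(1))

# A single 32x32 RGB image as input; output is one score per class.
logits = TinyImageClassifier()(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 2])
```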

“However, these hidden layers are like a locked black box: we see the input and output, but what is happening inside is not clear,” says Danilo Vasconcellos Vargas, Associate Professor from the Faculty of Information Science and Electrical Engineering at Kyushu University. “This lack of transparency becomes a serious problem when AI makes mistakes, sometimes triggered by something as small as changing a single pixel. AI might seem smart, but understanding how it comes to its decision is key to ensuring it’s trustworthy.”

Currently, methods for visualizing how AI organizes information rely on simplifying high-dimensional data into 2D or 3D representations. These methods let researchers observe how AI categorizes data points—for example, grouping images of cats close to other cats while separating them from dogs. However, this simplification comes with critical limitations.
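As a hedged illustration of this standard approach (not the paper’s new method), here is how one might project hidden-layer activations down to 2D with scikit-learn’s t-SNE. The `activations` and `labels` arrays are synthetic stand-ins for real network outputs:

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical setup: one 512-dimensional hidden-layer vector per image,
# with each image labeled cat (0) or dog (1). Random data stands in for
# real activations here.
rng = np.random.default_rng(0)
activations = rng.normal(size=(200, 512))
labels = rng.integers(0, 2, size=200)

# Flatten the high-dimensional representations into 2D for plotting.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(activations)

# In a real analysis one would scatter-plot `embedding` colored by `labels`
# and look for cat points clustering apart from dog points.
print(embedding.shape)  # (200, 2)
```

As the next quote explains, this kind of flattening is exactly where information gets lost.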

“When we simplify high-dimensional information into fewer dimensions, it’s like flattening a 3D object into 2D—we lose important details and fail to see the whole picture. Additionally, this method of visualizing how the data is grouped makes it difficult to compare between different neural networks or data classes,” explains Vargas.

In this study, the researchers developed a new method, called the k* distribution method, that more clearly visualizes and assesses how well deep neural networks categorize related items together... (MORE - details, no ads)
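The article leaves the definition of k* to the paper itself. Purely as a sketch of what “assessing how well a network groups related items” can look like in code, the toy metric below counts, for each sample in a latent space, how many of its nearest neighbors share its label before the first differently-labeled neighbor appears. The name `kstar_values` and this exact definition are assumptions for illustration, not the authors’ published formulation:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def kstar_values(latent: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """For each sample, count nearest neighbors sharing its class before
    the first differently-labeled neighbor appears.

    Illustrative guess at the flavor of a local-neighborhood metric; the
    authors' exact k* definition is in the TNNLS paper, not this article.
    """
    n = len(latent)
    # Neighbors sorted by distance; index 0 is the sample itself, so skip it.
    _, idx = NearestNeighbors(n_neighbors=n).fit(latent).kneighbors(latent)
    ks = np.empty(n, dtype=int)
    for i in range(n):
        neighbor_labels = labels[idx[i, 1:]]
        mismatches = np.nonzero(neighbor_labels != labels[i])[0]
        # Count of same-class neighbors preceding the first mismatch.
        ks[i] = mismatches[0] if len(mismatches) else n - 1
    return ks

# Hypothetical latent space: two loose clusters, one per class.
rng = np.random.default_rng(0)
latent = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(4, 1, (50, 8))])
labels = np.repeat([0, 1], 50)
print(kstar_values(latent, labels).mean())  # higher ≈ cleaner class clusters
```

On well-separated clusters this count is high for most samples; overlapping or fragmented classes drag it down, which is the general intuition behind distribution-style cluster diagnostics.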