Article: AI will surpass human brains once we crack the ‘neural code’

#1
C C Offline
https://newsroom.taylorandfrancisgroup.c...ural-code/

PRESS RELEASE: Humans will build Artificial Intelligence (AI) which surpasses our own capabilities once we crack the ‘neural code’, says an AI technology analyst.

Eitan Michael Azoff, a specialist in AI analysis, argues that humans are set to engineer superior intelligence with greater capacity and speed than our own brains.

What will unlock this leap in capability is understanding the ‘neural code’, he explains. That’s how the human brain encodes sensory information, and how it moves information in the brain to perform cognitive tasks, such as thinking, learning, problem-solving, internal visualisation, and internal dialogue.

In his new book, Towards Human-Level Artificial Intelligence: How Neuroscience can Inform the Pursuit of Artificial General Intelligence, Azoff says that one of the critical steps towards building ‘human-level AI’ is emulating consciousness in computers.

Computers can simulate consciousness. There are multiple types of consciousness, and scientists acknowledge that even simpler animals such as bees possess a degree of it. This is mostly consciousness without self-awareness; the nearest humans come to that state is when we are totally focused on a task, being “in the flow”.

Azoff believes computer simulation can create a virtual brain that, as a first step, could emulate consciousness without self-awareness. Consciousness without self-awareness helps animals plan actions, predict possible events and recall relevant incidents from the past, and it could do the same for AI.

Visual thinking could also be the key to unlocking the mystery of what consciousness is. Current AI does not ‘think’ visually; it uses ‘large language models’ (LLMs). As visual thinking predated language in humans, Azoff suggests that understanding visual thinking and then modelling visual processing will be a crucial building block for human-level AI.

Azoff says: “Once we crack the neural code, we will engineer faster and superior brains with greater capacity, speed and supporting technology that will surpass the human brain. We will do that first by modelling visual processing, which will enable us to emulate visual thinking. I speculate that in-the-flow-consciousness will emerge from that. I do not believe that a system needs to be alive to have consciousness.”

But Azoff issues a warning too, saying that society must act to control this technology and prevent its misuse: “Until we have more confidence in the machines we build, we should ensure the following two points are always followed. First, we must make sure humans have sole control of the off switch. Second, we must build AI systems with behaviour safety rules implanted.”
#2
stryder Offline
(Sep 9, 2024 05:43 PM)C C Wrote: https://newsroom.taylorandfrancisgroup.c...ural-code/

For true consciousness, it takes a bit more than just spatial thinking. Evolution occurred from reactions at the chemistry level, in other words sensation, specifically the taxis thereof (both good and bad feelings from reactions). That is what aided in knowing when something should be done or not, and reduced reliance on sheer numbers to beat Darwin's "survival of the fittest" (intelligence over brute force and ignorance).

To entwine the amount of data necessary from sensory appraisal in short-term memory, historic occurrences applied from long-term memory, and of course the "hallucination" of spatial awareness, all blended into the current "moment", is something that would be tricky with current AI.
#3
confused2 Offline
From what seems to be the press release for the book:
https://www.amazon.co.uk/Toward-Human-Le...1032829079

Quote:A neuromodulation feature such as a machine equivalent of dopamine that reinforces learning.
Yes!
Quote:The embodied HLAI machine, a neurorobot, that interacts with the physical world as it learns.
So also fight-or-flight .. interesting.
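A “machine equivalent of dopamine that reinforces learning” is commonly modelled in AI as a reward-prediction error, as in temporal-difference (TD) learning, where the error signal plays the role dopamine is thought to play in the brain. A minimal sketch of that idea (a toy illustration, not taken from Azoff's book):

```python
# Toy TD(0) value learning: the prediction error `delta` acts as the
# dopamine-like reinforcement signal that drives learning.

def td_update(values, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) step: nudge V(state) toward reward + gamma * V(next_state)."""
    delta = reward + gamma * values[next_state] - values[state]  # prediction error
    values[state] += alpha * delta  # "dopamine" reinforces the estimate
    return delta

# Two-state toy chain: state 0 leads to state 1, which is repeatedly rewarded.
V = [0.0, 0.0]
for _ in range(100):
    td_update(V, 1, 1, 1.0)  # state 1 earns reward 1.0 each visit
    td_update(V, 0, 1, 0.0)  # state 0 earns nothing but inherits future value
```

After training, the unrewarded state 0 still acquires positive value purely by anticipating the rewarded state 1, which is the sense in which a prediction-error signal "reinforces learning" about the future.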