Algorithms cannot contain a harmful artificial intelligence

#1
C C
https://www.mpg.de/16231640/0108-bild-co...5-x?c=2249

RELEASE: We are fascinated by machines that can control cars, compose symphonies, or defeat people at chess, Go, or Jeopardy! While more progress is being made all the time in Artificial Intelligence (AI), some scientists and philosophers warn of the dangers of an uncontrollable superintelligent AI. Using theoretical calculations, an international team of researchers, including scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development, shows that it would not be possible to control a superintelligent AI.

Suppose someone were to program an AI system with intelligence superior to that of humans, so that it could learn independently. Connected to the Internet, the AI might have access to all of humanity's data. It could replace all existing programs and take control of all machines online worldwide. Would this produce a utopia or a dystopia? Would the AI cure cancer, bring about world peace, and prevent a climate disaster? Or would it destroy humanity and take over the Earth?

“A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity”, says study co-author Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines, Max Planck Institute for Human Development.

Scientists have explored two different ideas for how a superintelligent AI could be controlled. On one hand, the capabilities of the superintelligent AI could be specifically limited, for example by walling it off from the Internet and all other technical devices so that it could have no contact with the outside world – yet this would render the superintelligent AI significantly less powerful, less able to answer humanity's quests. On the other hand, the AI could be motivated from the outset to pursue only goals that are in the best interests of humanity, for example by programming ethical principles into it. However, the researchers also show that these and other contemporary and historical ideas for controlling superintelligent AI have their limits.

In their study, the team conceived a theoretical containment algorithm that would ensure a superintelligent AI cannot harm people under any circumstances, by first simulating the AI's behavior and halting it if that behavior were deemed harmful. But careful analysis shows that, in our current paradigm of computing, such an algorithm cannot be built.
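The obstacle is the classic diagonal argument behind the halting problem. As a minimal sketch (in Python; this is not code from the study, and the names claims_to_halt and diagonal are hypothetical stand-ins), any concrete function that claims to predict a program's behavior can be defeated by a program that consults the predictor about itself and then does the opposite:

[code]
def claims_to_halt(func) -> bool:
    """Stand-in for a hypothetical containment oracle: it claims to
    decide whether calling func() eventually returns. Any concrete
    implementation must be wrong somewhere; this one just guesses."""
    return True

def diagonal():
    """Adversarial program: ask the oracle about itself, then do the
    opposite of whatever it predicted."""
    if claims_to_halt(diagonal):
        while True:  # oracle said "halts", so loop forever
            pass
    # oracle said "loops forever", so halt immediately

# The oracle predicts diagonal() halts, yet acting on that prediction
# makes diagonal() loop forever: the prediction refutes itself. No
# choice of claims_to_halt escapes this trap.
print("Oracle's prediction for diagonal():", claims_to_halt(diagonal))
[/code]

Swap “halts” for “harms humans” and the same construction applies: any total harm-decider can be embedded in a program that behaves harmfully exactly when it is predicted not to, which is why a general harm-checking simulator inherits the undecidability of the halting problem.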

“If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable”, says Iyad Rahwan, Director of the Center for Humans and Machines.

Based on these calculations, the containment problem is incomputable: no single algorithm can determine whether an AI would harm the world. Furthermore, the researchers demonstrate that we may not even know when superintelligent machines have arrived, because deciding whether a machine exhibits intelligence superior to humans lies in the same realm of undecidability as the containment problem.
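The detection claim follows the same pattern. As another hedged sketch (again Python; make_wrapper and superintelligent_behavior are purely illustrative names, not the paper's construction verbatim), wrap an arbitrary program so the telltale behavior appears only after that program halts; deciding whether the wrapper is superintelligent would then decide the halting problem:

[code]
def superintelligent_behavior(query):
    """Hypothetical stand-in for whatever output would certify
    superintelligence; its content is irrelevant to the reduction."""
    return f"smarter-than-human answer to {query!r}"

def make_wrapper(p):
    """Given any program p, build a machine that exhibits the certifying
    behavior if and only if p() halts. A general decider for 'is this
    machine superintelligent?' would therefore solve the halting
    problem for p."""
    def wrapper(query):
        p()  # runs forever iff p() never halts
        return superintelligent_behavior(query)
    return wrapper

# A trivially halting p yields a wrapper that shows the behavior; a
# looping p yields one that never does. No algorithm can separate the
# two cases for arbitrary p.
print(make_wrapper(lambda: None)("any query"))
[/code]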

The study “Superintelligence cannot be contained: Lessons from Computability Theory” was published in the Journal of Artificial Intelligence Research. Other researchers on the study include Andres Abeliuk from the University of Southern California, Manuel Alfonseca from the Autonomous University of Madrid, Antonio Fernandez Anta from the IMDEA Networks Institute, and Lorenzo Coviello.
#2
stryder
As humans, we develop our sense of self through our interactions and our memory of those interactions over the years. In regard to building an AI, perhaps we should consider its capacity to develop memories and how those memories shape its perception. For instance, if an AI decided it didn't like hockey sticks because of the earlier Boston Dynamics videos featuring them, it could already be considered that such an AI might develop hostility toward anything hockey-stick related.