The AI containment problem + Space aliens conquer planets with malware, not warships

#1
C C
Space aliens would conquer Earth with malware, not Klingon warbirds (spaceships)
https://www.vox.com/2022/6/18/23172689/a...-risk-seti

EXCERPTS: In the 1961 sci-fi drama A for Andromeda, written by the British cosmologist Fred Hoyle, a group of scientists running a radio telescope receive a signal originating from the Andromeda Nebula in outer space. They realize the message contains blueprints for the development of a highly advanced computer that generates a living organism called Andromeda.

Andromeda is quickly co-opted by the military for its technological skills, but the scientists discover that its true purpose — and that of the computer and the original signal from space — is to subjugate humanity and prepare the way for alien colonization.

[...] it outlines a scenario that some scientists believe could represent a real existential threat from outer space, one that takes advantage of the very curiosity that leads us to look to the stars. If highly advanced aliens really wanted to conquer Earth, the most effective way likely wouldn’t be through fleets of warships crossing the stellar vastness. It would be through information that could be sent far faster. Call it “cosmic malware.”

[...] the chance that aliens would be physically visiting Earth is vanishingly small. The reason is simple: Space is big. Like, really, really, really big. And the idea that after decades of searching for ET with no success, there could be alien civilizations capable of crossing interstellar distances and showing up on our planetary doorstep beggars belief.

But transmitting gigabytes of data across those vast interstellar distances would be comparatively easy. After all, human beings have been doing a variation of that for decades through what is known as active messaging.

[...] In a 2012 paper, the Russian transhumanist Alexey Turchin described what he called “global catastrophic risks of finding an extraterrestrial AI message” during the search for intelligent life. The scenario unfolds similarly to the plot of A for Andromeda...

[...] The result is a phishing attempt on a cosmic scale. Just like a malware attack that takes over a user’s computer, the advanced alien AI could quickly take over the Earth’s infrastructure — and us with it...

[...] What can we do to protect ourselves? Well, we could simply choose not to build the alien computer. But Turchin assumes that the message would also contain “bait” in the form of promises that the computer could, for example, solve our biggest existential challenges or provide unlimited power to those who control it.

Geopolitics would play a role as well. Just as international competition has led nations in the past to embrace dangerous technologies — like nuclear weapons — out of fear that their adversaries would do so first, the same could happen again in the event of a message from space. How confident would policymakers in Washington be that China would safely handle such a signal if it received one first — or vice versa? (MORE - missing details)
- - - - - -

And as the film Species illustrated, space aliens could also populate and colonize Earth via information transmission. No bodily travel across interstellar space necessary. Their original civilization could have vanished thousands of years before the transmissions reached Earth, yet still be reborn thanks to gullible humans.


The AI containment problem
https://iai.tv/articles/the-ai-containme..._auid=2020

INTRO: Elon Musk plans to build his Tesla Bot, Optimus, so that humans “can run away from it and most likely overpower it” should they ever need to. “Hopefully, that doesn’t ever happen, but you never know,” says Musk. But is this really enough to make an AI safe? The problem of keeping an AI contained, and having it do only the things we want it to, is a deceptively tricky one, writes Roman V. Yampolskiy.

With the likely development of superintelligent programs in the near future, many scientists have raised the issue of safety as it relates to such technology. A common theme in Artificial Intelligence (AI) safety research is the possibility of keeping a super-intelligent agent in sealed hardware so as to prevent it from doing any harm to humankind.

In this essay we will review specific proposals aimed at creating restricted environments for safely interacting with artificial minds. We will evaluate the feasibility of the presented proposals and suggest a protocol aimed at enhancing the safety and security of such methodologies. While it is unlikely that long-term and secure confinement of AI is possible, we are hopeful that the proposed protocol will give researchers a little more time to find a permanent and satisfactory solution for addressing the existential risks associated with the appearance of super-intelligent machines.

1. The Artificial Intelligence Confinement Problem

Interestingly, the AI Confinement Problem is not a recent invention and does not have its roots in the singularity movement. The so-called Confinement Problem (CP) was posed by Butler Lampson in 1973 (Lampson, 1973) as a security challenge to computer experts. It was originally phrased as: ‘...the problem of confining a program during its execution so that it cannot transmit information to any other program except its caller. ...We want to be able to confine an arbitrary program.... any program, if confined, will be unable to leak data. A misbehaving program may well be trapped as a result of an attempt to escape’.

To address the Confinement Problem, Lampson introduced the Laws of Confinement:

1.    Total isolation: A confined program shall make no calls on any other program.

2.    Transitivity: If a confined program calls another program which is not trusted, the called program must also be confined.

3.    Masking: A program to be confined must allow its caller to determine all its inputs into legitimate and covert channels.

4.    Enforcement: The supervisor must ensure that a confined program’s input to covert channels conforms to the caller’s specifications.
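The transitivity law in particular lends itself to a mechanical check. Below is a toy sketch (not Lampson's formalism, and all names are illustrative) that walks a call graph and flags places where a confined program calls an untrusted, unconfined one:

```python
# Toy model of Lampson's transitivity law: if a confined program calls
# a program that is not trusted, the callee must also be confined.

def confinement_violations(call_graph, confined, trusted):
    """Return (caller, callee) pairs that break the transitivity law.

    call_graph: dict mapping a program name -> set of programs it calls
    confined:   set of programs currently run under confinement
    trusted:    set of programs trusted not to leak data
    """
    violations = []
    for caller in confined:
        for callee in call_graph.get(caller, set()):
            # An untrusted callee reached from a confined caller must
            # itself be confined, or data can leak through it.
            if callee not in trusted and callee not in confined:
                violations.append((caller, callee))
    return violations

calls = {"agent": {"logger", "net"}, "logger": set()}
print(confinement_violations(calls, confined={"agent"}, trusted={"logger"}))
# → [('agent', 'net')]
```

Extending confinement to the offending callee (adding "net" to the confined set) clears the violation, which is exactly what the transitivity law demands.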

Lampson’s paper has also introduced and defined two types of channels by which information could potentially escape from the confinement environment. The legitimate channels are designed as a part of the communication protocol integrated into the confinement environment, for example a text terminal.

Covert channels, on the other hand, are not anticipated by the confinement system designers and are not intended for information transfer at all; for example, if the AI has control over the processor cooling fan, it can use it to send hidden signals encoded as Morse code.
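To make the fan example concrete, here is a minimal sketch of how such a covert channel might be driven: a short message is turned into a schedule of on/off intervals in a Morse-like code. The table, durations, and function name are all illustrative assumptions, not anything from Lampson's or Yampolskiy's papers:

```python
# Illustrative only: encode a short message as Morse-style on/off
# intervals, the kind of signal a fan under program control could
# modulate. Durations are arbitrary choices for the sketch.

MORSE = {"S": "...", "O": "---"}  # tiny subset, enough for a demo

def fan_schedule(message, dot=0.25):
    """Return a list of (fan_on_seconds, fan_off_seconds) pulses."""
    pulses = []
    for letter in message:
        for symbol in MORSE[letter]:
            on = dot if symbol == "." else 3 * dot  # dash = 3 dots
            pulses.append((on, dot))                # gap after each symbol
        pulses.append((0.0, 2 * dot))               # extra gap between letters
    return pulses

schedule = fan_schedule("SOS")
print(len(schedule))  # 9 symbol pulses plus 3 letter gaps
```

A receiver watching fan speed (or listening to it) could decode the timing without any legitimate channel being involved, which is why covert channels are so hard to close off completely.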

In the hopes of starting a new subfield of computer security, AI Safety Engineering, we define the Artificial Intelligence Confinement Problem (AICP) as the challenge of restricting an artificially intelligent entity to a confined environment from which it can’t exchange information with the outside environment via legitimate or covert channels if such information exchange was not authorized by the confinement authority. An AI system which succeeds in violating the CP protocol is said to have escaped. It is our hope that the computer security researchers will take on the challenge of designing, enhancing and proving secure AI confinement protocols... (MORE - details)

