Scivillage.com Casual Discussion Science Forum
"The apocalyptic threat from artificial intelligence isn’t science fiction" proposal - Printable Version

+- Scivillage.com Casual Discussion Science Forum (https://www.scivillage.com)
+-- Forum: Science (https://www.scivillage.com/forum-61.html)
+--- Forum: Alternative Theories (https://www.scivillage.com/forum-130.html)
+--- Thread: "The apocalyptic threat from artificial intelligence isn’t science fiction" proposal (/thread-9456.html)



"The apocalyptic threat from artificial intelligence isn’t science fiction" proposal - C C - Dec 3, 2020

https://quillette.com/2020/12/03/the-apocalyptic-threat-from-artificial-intelligence-isnt-science-fiction/

EXCERPTS (James D. Miller): . . . In building artificial intelligences, we have critical advantages over the evolutionary forces that shaped the human brain. [...] The technology is self-accelerating ... shortly after computers exceed humans in programming and computer-chip design ability, we might experience an intelligence explosion whereby smart AIs design still smarter AIs, which in turn design even smarter AIs. ... the AIs would so exceed humans in intelligence that they would effectively, compared to us, be godly in their powers. It would rapidly become impossible for humans to meaningfully monitor or control this process.

Poverty, unhappiness, disease, and perhaps even death itself could be vanquished. Alas, the most likely outcome of creating artificial superintelligence is our extermination. Certainly, a God-like superintelligence set free in the world could destroy us if it had the desire to do so (or what would pass for “desire” in its manner of cognition). [...] Simultaneously releasing a few dozen plagues would leave the resources of the Earth at its disposal, free of human interference.

[...] Specifically, we have no reasonable hope that the mere fact of superintelligence would cause an AI to develop a concept of morality that valued human wellbeing. Most humans, certainly, care little about most creatures of lesser intelligence...

Imagine that an AI has some objective that seems silly to us. The standard example used in the AI safety literature is an AI that wants to maximize the number of paperclips in the universe. [...] if all the AI cares about is paperclip maximization, for reasons that are perfectly rational to it, what could any feeble-minded meatsack say to get it to change its mind?

No one actually thinks that an AI will become a paperclip maximizer. But a computer superintelligence would likely develop goals, and we cannot begin to imagine in advance what they might be or how they would affect us. For many such goals, it would want to acquire resources. [...] As pioneering AI-safety theorist Eliezer Yudkowsky has written, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

The solution, therefore, seems simple: We don’t create an AI superintelligence until we are certain that we know how to make it friendly. Unfortunately, smarter and smarter AIs offer tempting economic and military benefits. For that reason alone, we are likely to continue to improve our AIs [...] Even if we were trying to be careful, there is no clear red line for us to avoid crossing.

[...] Fake pictures of people are created using so-called adversarial machine learning, in which two computer programs compete against each other. One program is given many correctly labelled real and fake pictures, and self-programs the patterns that allow it to determine which is which. It’s then provided with unlabeled data, to see if it really has figured out how to identify fake pictures. At this stage, a second program is tasked with creating fake pictures that will fool the first program into thinking that the pictures are of a real person. This second program is, in turn, provided with data on which pictures the first program correctly or incorrectly deemed fake. The interplay between these programs produces lots of training data that can be used to refine both designs, thereby leading to more and more realistic pictures from the second program, and forcing the first program to improve itself as well. And so on.
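To make that adversarial interplay concrete, here is a minimal sketch in PyTorch. It trains the two competing programs on toy 2-D Gaussian data rather than pictures; the data, the network sizes, and the training length are illustrative assumptions, not how any actual face-generation system is built.

import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: 2-D points from a shifted Gaussian, standing in for real pictures.
def real_batch(n=64):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

# Program 1: the discriminator, trained to tell real samples from fakes.
disc = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
# Program 2: the generator, trained to turn noise into samples that fool program 1.
gen = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))

opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Train the discriminator on correctly labelled real and fake samples.
    real = real_batch()
    fake = gen(torch.randn(64, 4)).detach()
    d_loss = loss_fn(disc(real), torch.ones(64, 1)) + \
             loss_fn(disc(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator so the discriminator calls its fakes "real".
    fake = gen(torch.randn(64, 4))
    g_loss = loss_fn(disc(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    if step % 500 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")

Each round gives both networks fresh training signal from the other, which is the "interplay" the excerpt describes: the generator's fakes improve, which forces the discriminator to improve, and so on.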

Even before this week’s protein-folding news from AlphaFold, it seemed likely that future historians might remember 2020 for our advances in machine learning more than for the COVID-19 pandemic. The biggest breakthrough was GPT-3, a machine-learning program trained on the Internet with the goal of learning human language. [...] The success of GPT-3 emerged from using a far bigger statistical model than the one behind GPT-2...

The AI-safety community’s backup plan for what to do if we develop a computer superintelligence before we have figured out how to reliably make it friendly is to turn it into an oracle—a thinking entity with extremely limited access to the outside world, and whose reactive intelligence would be limited to answering human-posed questions...

I recently co-authored an academic paper titled ‘Chess as a Testing Grounds for the Oracle Approach to AI Safety.’ My co-authors and I propose that we take advantage of AIs that already have superhuman ability in the domain of chess, and deliberately create deceptive AI-chess oracles that give bad advice. [...] The goal of putting humans in an environment with trustworthy and untrustworthy chess oracles would be to see if we could formulate strategies for determining who we can believe. Moreover, playing with deceptive narrow-domain superintelligences might give us useful practice for interacting with future AIs.
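As an illustration of that testing-ground idea, below is a toy sketch (using the python-chess package) of a player advised by two oracles, one honest and one occasionally deceptive. The one-ply material heuristic, the deceive_prob value, and the oracle names are assumptions made for demonstration; the paper's proposal uses genuinely superhuman chess engines as the oracles.

import random
import chess  # the python-chess package

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board: chess.Board, color: chess.Color) -> int:
    # Simple material balance from `color`'s point of view.
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, color))
        score -= value * len(board.pieces(piece_type, not color))
    return score

def rank_moves(board: chess.Board):
    # Order legal moves from best to worst by one-ply material gain (a crude stand-in
    # for a real engine's evaluation).
    scored = []
    for move in board.legal_moves:
        board.push(move)
        scored.append((material(board, not board.turn), move))
        board.pop()
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [move for _, move in scored]

def honest_oracle(board):
    return rank_moves(board)[0]            # always recommends its best move

def deceptive_oracle(board, deceive_prob=0.3):
    moves = rank_moves(board)
    if random.random() < deceive_prob:     # sometimes recommends its worst move
        return moves[-1]
    return moves[0]

board = chess.Board()
a, b = honest_oracle(board), deceptive_oracle(board)
print("Oracle A suggests:", board.san(a))
print("Oracle B suggests:", board.san(b))
# The player's problem, over many positions: work out which oracle can be believed.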

This may all sound remote and far-fetched. But it’s important for the public to understand that we have already likely passed the point where superintelligent AIs are technically feasible, or soon will be. The time to begin planning our response, and designing systems to give humanity a fighting chance, is now... (MORE - details)


RE: "The apocalyptic threat from artificial intelligence isn’t science fiction" proposal - Yazata - Mar 15, 2023


[Image: FrO06TcWYAQ_UXj?format=jpg&name=large]