Article  A lot must go wrong for superintelligent AI to become God

#1
C C Offline
https://quillette.com/2024/07/02/superin...afety-agi/

EXCERPTS: Published in 2014, Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies has shaped the debate on AI alignment for the past decade. More than any other book on the topic, it is responsible for shifting concerns about AI safety from “silly” to “serious.” And while many prominent coders still think concerns about the “existential risk” of AI are misplaced, these issues are now receiving legislative attention mainly because key milestones defined by Bostrom (“oracles” and “genies”) have been passed by OpenAI...

[...] Bostrom bases much of his argument on the notion of an “intelligence explosion,” in which a nascent superintelligence improves its own cognition at an ever-accelerating rate. The Skynet story in the Terminator films raises the spectre of an AI takeover and world domination. Bostrom argues that the first team to achieve “superintelligence” will gain a “decisive strategic advantage,” as it will be in a position to impose its “will” upon all other intelligences.

[...] A 2024 paper by Adriana Placani of the University of Lisbon sees anthropomorphism in AI as “a form of hype and fallacy.” As hype, it exaggerates AI capabilities “by attributing human-like traits to systems that do not possess them.” As fallacy, it distorts “moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust.” A key problem, she contends, is that anthropomorphism is “so prevalent in the discipline [of AI] that it seems inescapable.” This is because “anthropomorphism is built, analytically, into the very concept of AI.” The name of the field “conjures expectations by attributing a human characteristic—intelligence—to a non-living, non-human entity.”

Many who work with code find the prospect of programs becoming goal-seeking, power-seeking, and “making their own decisions” fundamentally implausible...

[...] Anthropomorphism in AI results from calling electromechanical things by human names. Humans instinctively project their internal models of cognition onto other things; it is a longstanding and well-known vulnerability, and it is why hunter-gatherers attribute natural phenomena such as weather to spirits with human qualities.

Since the 1960s, when Joseph Weizenbaum’s chatbot ELIZA seduced his secretary into thinking it was a real conversationalist, humans have been fooled by machines that manipulate symbols according to rules. But there is no humanity or consciousness behind the language AI models produce, just algorithms and “approximated functions”: inscrutable rules extracted from large training datasets during machine learning. There is no qualitative interest or caring in the machine’s data processing. What runs in the machine is executing code, not emotion, not feeling, not life.
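To make the point about rule-following concrete, here is a minimal ELIZA-style sketch in Python. The patterns and canned responses are invented for illustration and are far simpler than Weizenbaum’s original script, but the mechanism is the same kind of pattern-matching on symbols:

[code]
import re

# A few invented ELIZA-style rules: a regex pattern and a response template.
# The program "converses" by matching patterns and echoing fragments back;
# nothing in it models meaning, intent, or feeling.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),    "Tell me more about your {0}."),
]
DEFAULT_RESPONSE = "Please go on."

def reply(user_input):
    # Try each rule in order; reuse captured text in the response template.
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT_RESPONSE

print(reply("I feel anxious about these machines"))  # Why do you feel anxious about these machines?
print(reply("My computer seems to understand me"))   # Tell me more about your computer.
[/code]

Everything the program “says” comes from those templates; there is no model of the user, the topic, or the conversation.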

[...] Until we can inspect an actual blueprint for sentience and a convincing implementation, we should remain sceptical that machines might grow to hate people. Further, if one can make hate, wrath, fear, and loathing, one can also presumably make love, understanding, respect, and admiration. But given that the artificial neurons used to make “neural networks” in AI omit most of the properties of human neurons, we should be cautious about their ability to produce sentience... (MORE - missing details)
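As a rough illustration of how stripped-down those artificial neurons are, here is a sketch of a single standard artificial neuron in Python (the inputs, weights, and bias are made-up numbers): a weighted sum passed through a sigmoid, with none of the membrane dynamics, neurotransmitter chemistry, or spiking behaviour of a biological neuron.

[code]
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, squashed through a sigmoid.
    # This is the entire mechanism of a standard artificial "neuron":
    # no spikes, no chemistry, no state carried between calls.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Made-up numbers purely for illustration.
print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
[/code]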

RELATED (Harlan Ellison): I Have No Mouth, and I Must Scream
#3
Magical Realist Offline
It may be anthropomorphic to attribute human qualities to computers, but it's anthropocentric to think as humans we are the only ones with consciousness and intelligence. I personally believe the universe is permeated with all kinds of minds, some in animate and some in inanimate beings. AI will simply be an extension of human consciousness into its own technology, eventually becoming indistinguishable from us. Consciousness is a highly contagious condition. Talk to a tree long enough and eventually it will start talking back to you.

