Scivillage.com Casual Discussion Science Forum

Full Version: Elon Musk is the A.I. threat: Why Musk is trying to convince us that A.I. is evil
The Tesla CEO called for a pause in chatbot development. But he’s pushing something much more dangerous.
https://slate.com/technology/2023/03/elo...tesla.html

INTRO: For much of the past decade, Elon Musk has regularly voiced concerns about artificial intelligence, worrying that the technology could advance so rapidly that it creates existential risks for humanity. Though seemingly unrelated to his job making electric vehicles and rockets, Musk’s A.I. Cassandra act has helped cultivate his image as a Silicon Valley seer, tapping into the science-fiction fantasies that lurk beneath so much of startup culture.

Now, with A.I. taking center stage in the Valley’s endless carnival of hype, Musk has signed on to a letter urging a moratorium on advanced A.I. development until “we are confident that their effects will be positive and their risks will be manageable,” seemingly cementing his image as a force for responsibility amid high technology run amok.

Don’t be fooled. Existential risks are central to Elon Musk’s personal branding, with various Crichtonian scenarios underpinning his pitches for Tesla, SpaceX, and his computer-brain-interface company Neuralink. But not only are these companies’ humanitarian “missions” empty marketing narratives with no real bearing on how they are run, Tesla has created the most immediate—and lethal—“A.I. risk” facing humanity right now, in the form of its driving automation.

By hyping the entirely theoretical existential risk supposedly presented by large language models (the kind of A.I. model used, for example, for ChatGPT), Musk is sidestepping the risks, and actual damage, that his own experiments with half-baked A.I. systems have created.

The key to Musk’s misdirection is humanity’s primal paranoia about machines. Just as humans evolved beyond the control of gods and nature, overthrowing them and harnessing them to our wills, so too do we fear that our own creations will return the favor. That this archetypal suspicion has become a popular moral panic at this precise moment may or may not be justified, but it absolutely distracts us from the very real A.I. risk that Musk has already unleashed.

That risk isn’t an easy-to-point-to villain—a Skynet, a HAL—but rather a flavor of risk we are all too good at ignoring: the kind that requires our active participation. The fear should not be that A.I. surpasses us out of sheer intelligence, but that it dazzles us just enough that we trust it, and in doing so endanger ourselves and others. The risk is that A.I. lulls us into such complacency that we kill ourselves and others... (MORE - details)
Niedermeyer (what's in a name) has certainly raised real issues, but his longstanding, fiercely critical focus on Musk arouses a certain suspicion.
Having skimmed through his other Musk hit piece https://slate.com/technology/2022/05/elo...ables.html
I can't help but wonder if there is a hidden agenda: obliquely undermining Musk's Twitter initiative, which ostensibly, at least, seeks to break the internet censorship and anti-free-speech monopoly of traditional 'big tech', the very players aggressively leveraging AI to 'weed out' dissident voices from supposedly open-to-all platforms.