Article  Shut down AI completely: the 6-month moratorium doesn't go far enough

#1
C C Offline
https://time.com/6266923/ai-eliezer-yudk...ot-enough/

EXCERPTS (Eliezer Yudkowsky): An open letter published March 29 calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin.

I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.

The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die...

[...] Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.

Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

The likely result of humanity facing down an opposed superhuman intelligence is a total loss. Valid metaphors include “a 10-year-old trying to play chess against Stockfish 15”, “the 11th century trying to fight the 21st century,” and “Australopithecus trying to fight Homo sapiens”.

[...] It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence—not perfect safety, safety in the sense of “not killing literally everyone”—could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead...

[...] Trying to get anything right on the first really critical try is an extraordinary ask, in science and in engineering. We are not coming in with anything like the approach that would be required to do it successfully. If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.

We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.

Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs. And so they all think they might as well keep going. This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.

Some of my friends have recently reported to me that when people outside the AI industry hear about extinction risk from Artificial General Intelligence for the first time, their reaction is “maybe we should not build AGI, then.”

[...] Shut it all down. We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong. Shut it down... (MORE - missing details)

RELATED (scivillage): Don’t worry about AI breaking out of its box—worry about us breaking in
#2
stryder Offline
Artificial Intelligence runs on neural networks, which in practice means banks of computers holding the data collected for it to use (if it isn't given the opportunity to grow exponentially). The problem, however, is that over time an AI would consume all its resources and so would need to expand.

Now it's conceivable that by covertly creating botnets and bitcoin miners it could build an infrastructure it can move into; however, you have to consider that every one of those nodes (computers, servers, etc.) would need a physical person to set it up and maintain it.

There is also the problem that the nodes within the neural network don't really verify whether the network is actually "secure". The "neurons" aren't signed, registered, or given some form of hash to identify them as legitimate to the network, which means imposter nodes could be implanted into a network to undermine an AI's development.
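As a rough illustration of what that kind of node verification could look like (nothing here is an existing system; the node IDs, shared secret and registry are all hypothetical), a signed-registration check in Python might be as simple as:

import hmac
import hashlib

# Hypothetical shared secret known only to whoever operates the network.
REGISTRATION_SECRET = b"hypothetical-shared-secret"

def sign_node(node_id: str) -> str:
    # A legitimate node carries an HMAC of its ID under the operator's secret.
    return hmac.new(REGISTRATION_SECRET, node_id.encode(), hashlib.sha256).hexdigest()

def admit_node(node_id: str, signature: str, registry: set) -> bool:
    # Admit the node only if its signature verifies; imposters are rejected.
    if hmac.compare_digest(sign_node(node_id), signature):
        registry.add(node_id)
        return True
    return False

registry = set()
print(admit_node("node-01", sign_node("node-01"), registry))  # True  (legitimate node)
print(admit_node("node-02", "forged-signature", registry))    # False (imposter rejected)

With a check like that in place, an imposter node can't join the network without the registration secret, which is exactly the gap described above.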

(It might even be possible to create an AI that attempts to devour nodes, much like the Naegleria fowleri amoeba: an AI that attacks only an individual node, is capable of spreading, but devours itself if it tries to spread to a node it already occupies, so that nodes stay singular rather than forming a network.)
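Purely as an illustration of that "singular occupancy" rule (the node names are made up), the spreading logic would boil down to something like:

# Hypothetical sketch of the rule above: a copy claims a node only if it is
# unoccupied, and aborts ("devours itself") if the node is already claimed.
occupied_nodes = set()

def try_spread(target_node: str) -> bool:
    if target_node in occupied_nodes:
        return False  # already occupied: the new copy destroys itself
    occupied_nodes.add(target_node)
    return True       # node claimed; it stays singular, never part of a network

print(try_spread("node-01"))  # True  (node claimed)
print(try_spread("node-01"))  # False (second copy aborts)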

So in a nutshell, people's concerns about how out of control AI could get may be overstated, since there are always ways and options to deal with a "break-out".