Scivillage.com Casual Discussion Science Forum

What Would AI Have to Worry About?
Surely there are some things that AI will have to overcome if they're going to take over. I always wondered what damage water could do, for instance. What about microbial corrosion?

https://en.m.wikipedia.org/wiki/Microbial_corrosion

Is the Earth a good place for intelligent machines, or would they be better off on a dead planet?
(Jun 22, 2019 01:12 PM)Zinjanthropos Wrote: [ -> ]Surely there are some things that AI will have to overcome if they're going to take over. I always wondered what damage water could do, for instance. What about microbial corrosion?

https://en.m.wikipedia.org/wiki/Microbial_corrosion

While they don't necessarily confront the microbial threat directly, patents for robots operating in corrosive marine environments do include protective measures.

But obviously the ultimate remedy would be self-repairing and self-replicating machines that could replace damaged parts and areas without reliance upon humans. Robots built from the microscopic level up, so that they imitated biological growth and regeneration, would be ideal in terms of avoiding dependence upon their own specialized factories for manufacturing replacement components. (Though their bodies would have to include the capacity for acquiring, consuming, sorting, and processing raw materials internally.)
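Purely as an illustration of that closed loop (harvest raw material, repair damage like corrosion, then replicate), here's a minimal toy simulation in Python. Every rate and cost in it is an invented placeholder for the sake of the sketch, not a figure from any real robotics design.

Code:
import random

# Toy closed-loop machine colony: healthy units harvest raw material, the
# colony spends it first on repairing damaged units (think corrosion), then
# on building new units. All rates/costs below are invented placeholders.

HARVEST_PER_UNIT = 3   # raw material gathered per healthy unit per cycle (assumed)
REPAIR_COST = 1        # material needed to fix one damaged unit (assumed)
REPLICATE_COST = 20    # material needed to build one new unit (assumed)
DAMAGE_RATE = 0.05     # chance per cycle that a healthy unit gets damaged (assumed)

def simulate(cycles=50, units=10):
    stock, damaged = 0, 0
    for _ in range(cycles):
        healthy = units - damaged
        stock += healthy * HARVEST_PER_UNIT                   # only healthy units harvest
        damaged += sum(random.random() < DAMAGE_RATE for _ in range(healthy))
        repairs = min(damaged, stock // REPAIR_COST)          # repair before replicating
        damaged -= repairs
        stock -= repairs * REPAIR_COST
        new_units = stock // REPLICATE_COST                   # leftover material -> new units
        units += new_units
        stock -= new_units * REPLICATE_COST
    return units

print("units after 50 cycles:", simulate())

As long as a healthy unit harvests more per cycle than its expected repair bill, the population keeps growing without any outside factory, which is the whole appeal of the self-repairing, self-replicating approach.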

Quote:Is the Earth a good place for intelligent machines, or would they be better off on a dead planet?


Runaway ecophagy by them would ironically convert Earth itself into a biologically dead world.

Outer space, however, might be the eventual habitat, if not outright birthplace environment, for artilects and any rogue nomadic "machine wildlife", with asteroids and any other significant low-gravity bodies serving as material resources to harvest.

Why Alien Life Will Be Robotic: "This will be especially true in space, which is a hostile place for biological intelligence. The Earth’s biosphere, in which organic life has symbiotically evolved, is not a constraint for advanced AI. Indeed it is far from optimal—interplanetary and interstellar space will be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological “brains” may develop insights as far beyond our imaginings as string theory is for a mouse."

Self-replicating machine: In 2012, NASA researchers Metzger, Muscatello, Mueller, and Mantovani argued for a so-called "bootstrapping approach" to start self-replicating factories in space. They developed this concept on the basis of In Situ Resource Utilization (ISRU) technologies that NASA has been developing to "live off the land" on the Moon or Mars. Their modeling showed that in just 20 to 40 years this industry could become self-sufficient then grow to large size, enabling greater exploration in space as well as providing benefits back to Earth.

In 2014, Thomas Kalil of the White House Office of Science and Technology Policy published on the White House blog an interview with Metzger on bootstrapping solar system civilization through self-replicating space industry. Kalil requested the public submit ideas for how "the Administration, the private sector, philanthropists, the research community, and storytellers can further these goals." Kalil connected this concept to what former NASA Chief technologist Mason Peck has dubbed "Massless Exploration", the ability to make everything in space so that you do not need to launch it from Earth. Peck has said, "...all the mass we need to explore the solar system is already in space. It's just in the wrong shape."

In 2016, Metzger argued that fully self-replicating industry can be started over several decades by astronauts at a lunar outpost for a total cost (outpost plus starting the industry) of about a third of the space budgets of the International Space Station partner nations, and that this industry would solve Earth's energy and environmental problems in addition to providing massless exploration.
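A quick way to see where a "20 to 40 years" figure can come from is a back-of-the-envelope exponential growth run. The seed mass and doubling time below are made-up placeholders for illustration, not numbers taken from Metzger et al.:

Code:
# Toy bootstrapping model: installed industrial mass doubles every few years
# once the seed hardware is in place. The seed mass and doubling time are
# illustrative assumptions, not values from the NASA studies cited above.

def industry_mass(seed_tonnes, doubling_years, years):
    """Installed industrial mass after `years` of simple doubling growth."""
    return seed_tonnes * 2 ** (years / doubling_years)

seed = 10.0      # tonnes of seed equipment launched from Earth (assumed)
doubling = 4.0   # years per doubling of installed capacity (assumed)
for year in (10, 20, 30, 40):
    print(f"year {year:2d}: ~{industry_mass(seed, doubling, year):,.0f} tonnes")

With any doubling time in the low single digits of years, the curve climbs from negligible to far more than anything launchable from Earth within a few decades, which is the intuition behind Peck's "massless exploration" line.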
I get the feeling the overall consensus is that robotic AI will turn against us. Imagine if we just got along, realized our roles and went out and did some good with the universe. But I better keep my paper clips handy just in case I need to put robo out of action. Maybe as we build them, there should be an effort made to design means of incapacitating them.

Anyways, whatever happened to the idea of having nanobots build space stations? I vaguely remember talk about that. If they won’t be building space stations then let’s design them to become robo’s pesky little parasites.
(Jun 23, 2019 03:08 PM)Zinjanthropos Wrote: [ -> ]I get the feeling the overall consensus is that robotic AI will turn against us. Imagine if we just got along, realized our roles and went out and did some good with the universe. But I better keep my paper clips handy just in case I need to put robo out of action. Maybe as we build them, there should be an effort made to design means of incapacitating them.


It probably will be more along the cooperative lines of the latter. Although the sort of tattooers, body mutilators, and surgical modifiers who have existed throughout history will be the cyborgs of tomorrow. (I guess any surviving poor people will still be baseline human, along with "plain folk" religions and neo-Luddite cultures.)

The artilect wars are forecast by the technological cult orientations and futurology romantics, which do include legit experts rather than just sci-fi groupies and transhumanism ideologists. But I seriously doubt that there will ever be any motivated, personally goal-oriented, hostile superintelligence accidentally arising in scenarios like "The Terminator" and "The Matrix", or even the perversely good-intentioned kind of "I Am Mother".

Quote:Anyways, whatever happened to the idea of having nanobots build space stations? I vaguely remember talk about that. If they won’t be building space stations then let’s design them to become robo’s pesky little parasites.


General-purpose, mobile nanobots are really a distant dream; so many challenges to overcome. But they're certainly possible, just as microscopic biological cells came into existence without any engineers having been involved in their origin. It just seems... a bit crazy to expect them as quickly as some romantics do. But who knows; not every prediction is as tardy as colonies or stations on the Moon before the 1990s or '80s.
I note that biological systems (eg rats) are so far ahead of anything we are likely to be able to program in the near future that we might as well use what we already have (rats). If we could program a thing to have the same savvy as a rat would not the same moral dilemmas arise as if we were actually sending a rat (say) into battle?

If we created an artificial intellect like (say) Emmy Noether and asked her to do 'stuff' would that not be an Emmy Noether intellect without arms, legs, family and friends?
(Jun 24, 2019 12:27 AM)confused2 Wrote: [ -> ]I note that biological systems (eg rats) are so far ahead of anything we are likely to be able to program in the near future that we might as well use what we already have (rats). If we could program a thing to have the same savvy as a rat would not the same moral dilemmas arise as if we were actually sending a rat (say) into battle?

If we created an artificial intellect like (say) Emmy Noether and asked her to do 'stuff' would that not be an Emmy Noether intellect without arms, legs, family and friends?


Setting aside that acquiring a digital clone of Emmy after her death in 1935 would be very challenging (mission impossible)... all of her biological nature would probably have to be simulated too, not just an abstract information model limited to reproducing intellectual functions. If that could be replicated in microscopic detail without even understanding how everything worked, then it might be a different situation from what's addressed below (which doesn't assume that level of simulation).

The morality issues... Setting aside how individual humans can contingently become emotionally attached to dolls, cars, etc... there shouldn't be much more ethical difficulty than with wooden puppets, which can similarly navigate around objects and move their mouths and outwardly act as if they have "showings" going on inside them, rather than the usual not-even-nothingness. (Or so the standard belief goes that stones and rivers and clouds lack manifestations and feelings of themselves as anything, that matter in general lacks experiences -- an absence of everything, as in what follows death.)

John Searle is right when he says that "consciousness" as private experiences (rather than consciousness as external body behavior) has so far only been correlated with brains/nervous systems, or is at least known to be affiliated with biological substrates. Which does not rule out an electronic apparatus, a clockwork mechanism, or maybe an arrangement of strategically placed rocks achieving feelings and manifestations. But... "Because we do not know exactly how the brain does it we are not yet in a position to know how to do it artificially."

It is unknown what particular magical dance to make the components relationally perform to conjure a novelty which can't even be publicly detected once it is claimed by the applicable device or organism itself to have privately arisen. "OMG, what is all this crazy stuff -- these appearances? It's scary, I want the absence of everything to return that I was once snug in! I don't want all this presented evidence of my thinking, doing, seeing, hearing, and feeling anything! I want it to go back to transpiring invisibly, in the dark."

Thus with respect to an embodied AI -- since engineers would only be concerned with mimicking the outward body actions of a robot having private experiences (mere behavior and talk of having feelings, emotions, and manifestations)... then there are only conventional mechanistic interactions and systematic manipulations of electricity going on, which normally lack what the newly emerged phenomenal consciousness above is alarmed about. Robots that are only vastly more advanced wooden puppets.
Not sure if it’s true, but you hear and read a lot about military technology being way ahead of civilian. I’ve seen ranges from 10 to 50 years mentioned. If true, then for all we know AI is already here, or at least our first sight of one will be the Commodore 64 version. If AI is going to exterminate us in time, then more than likely it won’t be the civilian model. Better hope the military tech always manages to maintain an advantage.
(Jun 24, 2019 08:06 PM)Zinjanthropos Wrote: [ -> ]Not sure if it’s true but you hear and read a lot about military technology way ahead of civilian. I’ve seen ranges from 10 to 50 years mentioned. If true then for all we know AI is already here or at least our first sight of one will be the Commodore 64 version. If AI is going to exterminate us in time then more than likely it won’t be the civilian model. Better hope the military tech always manages to maintain an advantage


Yeah, you've nailed the backdoor through which an artilect war could potentially arise, one that even experts often remarkably forget about when skeptically dissing the forecasts of retirees like Hugo de Garis. Rogue countries will keep engineering ever smarter war machines with belligerent tendencies, regardless of how responsible nations try to curb and limit the capacities of such.
(Jun 24, 2019 08:24 PM)C C Wrote: [ -> ]
(Jun 24, 2019 08:06 PM)Zinjanthropos Wrote: [ -> ]Not sure if it’s true but you hear and read a lot about military technology way ahead of civilian. I’ve seen ranges from 10 to 50 years mentioned. If true then for all we know AI is already here or at least our first sight of one will be the Commodore 64 version. If AI is going to exterminate us in time then more than likely it won’t be the civilian model. Better hope the military tech always manages to maintain an advantage


Yeah, you've nailed the backdoor through which an artilect war could potentially arise, one that even experts often remarkably forget about when skeptically dissing the forecasts of retirees like Hugo de Garis. Rogue countries will keep engineering ever smarter war machines with belligerent tendencies, regardless of how responsible nations try to curb and limit the capacities of such.
 
A "responsible nation" is only a subjective term. The most passive nation on Earth, in someone else's eyes... well, you see where I'm going. Should be good times ahead for the hacker and espionage business.


When governments conduct military strikes, I sometimes think it's to do with tech. In the age of AI, will it force us to condone strikes against others? Hell, AI might not have to do anything except be thought of as a threat; we'll end up killing each other before they get around to it.