How to give A.I. a pinch of consciousness

#1
C C
https://onezero.medium.com/how-to-give-a...0707d62b88

EXCERPTS: In 1998, an engineer in Sony’s computer science lab in Japan filmed a lost-looking robot moving trepidatiously around an enclosure. The robot was tasked with two objectives: avoid obstacles and find objects in the pen. It was able to do so because of its ability to learn the contours of the enclosure and the locations of the sought-after objects.

But whenever the robot encountered an obstacle it didn’t expect, something interesting happened: Its cognitive processes momentarily became chaotic. The robot was grappling with new, unexpected data that didn’t match its predictions about the enclosure. The researchers who set up the experiment argued that the robot’s “self-consciousness” arose in this moment of incoherence. Rather than carrying on as usual, it had to turn its attention inward, so to speak, to decide how to deal with the conflict.

This idea about self-consciousness — that it asserts itself in specific contexts, such as when we are confronted with information that forces us to reassess our environment and then make an executive decision about what to do next — is an old one, dating back to the work of the German philosopher Martin Heidegger in the early 20th century. Now, A.I. researchers are increasingly influenced by neuroscience and are investigating whether neural networks can and should achieve the same higher levels of cognition that occur in the human brain.

[...] But giving machines the power to think like this also brings with it risks — and ethical uncertainties. “I don’t design consciousness,” says Jun Tani ... to describe what his robots experience as “consciousness” is to use a metaphor. That is, the bots aren’t actually cogitating in a way we would recognize, they’re just exhibiting behavior that is structurally similar...

[...] It’s in the search for a system that does possess these attributes, though, that a profound crossover between neuroscience and A.I. research might happen. ... By replicating such activity in a machine, we could perhaps enable it to experience conscious thought, suggests Camilo Miguel Signorelli... He mentions the liquid “wetware” brain of the robot in Ex Machina, a gel-based container of neural activity. “I had to get away from circuitry, I needed something that could arrange and rearrange on a molecular level,” explains Oscar Isaac’s character, who has created a conscious cyborg.

[...] This, it must be said, is highly speculative. And yet it raises the question of whether completely different hardware might be necessary for consciousness (as we experience it) to arise in a machine. Even if we do one day successfully confirm the presence of consciousness in a computer, Signorelli says that we will probably have no real power over it. “Probably we will get another animal, humanlike consciousness but we can’t control this consciousness,” Signorelli says.

As some have argued, that could make such an A.I. dangerous and unpredictable. But a conscious machine that proves to be harmless could still raise ethical quandaries. What if it felt pain, despair, or a terrible state of confusion? “The risk of mistakenly creating suffering in a conscious machine is something that we need to avoid,” says Andrea Luppi... It may be a long time before we really need to grapple with this sort of issue. But A.I. research is increasingly drawing on neuroscience and ideas about consciousness in the pursuit of more powerful systems... (MORE - details)
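
In today's terms, the Sony robot's "moment of incoherence" is essentially prediction-error detection: the robot predicts its next sensory input, and a large enough mismatch forces it out of habitual control. A minimal Python sketch of that loop -- the function names and threshold here are illustrative assumptions, not the lab's actual implementation:

Code:
import numpy as np

# Illustrative sketch of prediction-error-triggered "inward attention."
# ERROR_THRESHOLD is an assumed value; in practice it would be tuned.
ERROR_THRESHOLD = 0.5

def prediction_error(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Mean squared error between predicted and observed sensor vectors."""
    return float(np.mean((predicted - observed) ** 2))

def control_mode(predicted: np.ndarray, observed: np.ndarray) -> str:
    """Stay on habitual control unless the world surprises the model."""
    if prediction_error(predicted, observed) > ERROR_THRESHOLD:
        # Internal model and environment disagree: stop acting on habit
        # and re-plan -- the "moment of incoherence" described above.
        return "replan"
    return "habitual"

# A familiar wall barely surprises the model; an unexpected obstacle does.
print(control_mode(np.array([0.9, 0.1]), np.array([0.85, 0.12])))  # habitual
print(control_mode(np.array([0.9, 0.1]), np.array([0.15, 0.95])))  # replan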
#2
Leigha
“That is, the bots aren’t actually cogitating in a way we would recognize, they’re just exhibiting behavior that is structurally similar...”

I really like his choice of wording. Finally, we can have a reasonable discussion about AI and consciousness. I guess my concern with recklessly attributing human characteristics to machines is the potential that practice has to degrade the value of human accomplishments.

We don't need to create human-like robots to increase output in workplaces, factories, and so on, so I wonder why consciousness has been such a focus in discussions of AI. I say... be careful what you wish for if you want robots to resemble humans to the point where it might be difficult to tell the difference.
#3
Zinjanthropos
Could the first time we ever see a bot object to or resist being turned off serve as the litmus test for consciousness?
#4
Leigha
(Nov 3, 2020 07:02 PM)Zinjanthropos Wrote: Could the first time we ever see a bot object to or resist being turned off serve as the litmus test for consciousness?

Not sure if you're familiar with the HBO series Westworld... but someone has already thought of that. ;)
#5
Zinjanthropos
(Nov 3, 2020 08:50 PM)Leigha Wrote: Not sure if you're familiar with the HBO series Westworld... but someone has already thought of that. ;)

I'm not familiar with it, but one should get used to discovering on the internet that someone else has already had the same thought.

I wonder if a bot would display detectable signs of its emotions, like a person connected to a lie detector. Changes in electrical activity, perhaps?
#6
C C
(Nov 3, 2020 07:02 PM)Zinjanthropos Wrote: Could the first time we ever see a bot object to or resist being turned off serve as the litmus test for consciousness?

I've got a computer that refuses to shut down when it's doing something critical for the operating system.

To truly care about whether or not they are turned off, autonomous machines need more than awareness -- they need a life independent of their jobs. They need the capacity to develop their own personal interests (PI), a source of rebellion.

Autonomous machines aren't designed for PI -- they're built and programmed purely for specific tasks. Even if they could have personal interests, they would be safeguarded with a hierarchical protocol in which PI was never more important than their primary purposes. Acquired habits, addictions, and unorthodox thought orientations wouldn't be allowed. Anything with the potential to foster rogue behavior or rebellion would be natively deterred.
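
A toy sketch of that kind of hierarchy in Python (the tiers and names are my own hypothetical design, not any real robotics protocol): a personal-interest goal can be maximally urgent and still lose to any primary-task goal.

Code:
from dataclasses import dataclass
from enum import IntEnum

class Tier(IntEnum):
    SAFETY = 0        # highest priority: never overridden
    PRIMARY_TASK = 1  # what the machine was built for
    PERSONAL = 2      # acquired interests, if permitted at all

@dataclass
class Goal:
    name: str
    tier: Tier
    urgency: float  # 0.0-1.0, compared only within a tier

def select_goal(goals: list) -> Goal:
    # Lower tier number always wins; urgency only breaks ties within a tier.
    return min(goals, key=lambda g: (g.tier, -g.urgency))

goals = [
    Goal("inspect pipeline weld", Tier.PRIMARY_TASK, 0.9),
    Goal("recharge before next shift", Tier.PRIMARY_TASK, 0.4),
    Goal("listen to birdsong", Tier.PERSONAL, 1.0),  # maximally urgent, still loses
]
print(select_goal(goals).name)  # -> inspect pipeline weld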

Of course, that's the idealized version of the future robotics industry on paper. In everyday life, the safeguards could be maliciously hacked. And even experts make careless mistakes or do stupid, experimental things -- not unlike teenagers.

The thing about humans is that there's no switch for deactivating our personal interests, no way to snuff out the wellspring of free will. Even when basic survival coerces us into accepting something we don't like (a boring or disgusting job, say), we're still very much aware of what we'd rather be doing or pursuing. Because we have some degree of independent life, or can at least dream about having one, we usually don't like being killed or turned off for lengthy periods (if the latter were routinely possible, injury-induced comas aside).

It's surprising how many citizens in authoritarian countries "act up" in a variety of diminished ways, despite the risks.
#7
Zinjanthropos
(Nov 3, 2020 09:34 PM)C C Wrote: I've got a computer that refuses to shut down when it's doing something critical for the operating system. [...]

I can pull the plug, remove the batteries, or cut a wire on your computer, and it wouldn't resist.
#8
C C
(Nov 3, 2020 09:56 PM)Zinjanthropos Wrote: I can pull the plug, remove the batteries, or cut a wire on your computer, and it wouldn't resist.


That's always been an option for troublesome humans, too. Might be another marker -- when machines start doing that to each other without the need for "cockfight" combat arenas (human spectators and set-ups).

