Can a robot learn right from wrong?

#1
Leigha
Interesting read. I'm always enthusiastic about new discoveries, but morality and ethics are often built upon subjective "truths." That said, these robots are going to be used in confined spaces for specific tasks...but still. Do you think that this is a good idea?

https://www.newscientist.com/articl...rap-robot-paralysed-by-choice-of-who-to-save/

“When we’re talking about ethics, all of this is largely about robots that are developed to function in pretty prescribed spaces,” says Wendell Wallach, author of Moral Machines: Teaching Robots Right from Wrong. Still, he says, experiments like Winfield’s hold promise in laying the foundations on which more complex ethical behaviour can be built. “If we can get them to function well in environments when we don’t know exactly all the circumstances they’ll encounter, that’s going to open up vast new applications for their use.”
#2
C C
(Jul 10, 2019 04:29 AM)Leigha Wrote: Do you think that this is a good idea?


Algorithmic learning and embodied interactive learning are yielding some impressive results that surpass the old "have all the programming figured out from the start, provided by the creators" approach.

In a coincidental pseudo-historic parallelism, Abrahamic mythology (as cultural metaphoric story) features beings that are ready-made from the start with a built-in, predictable moral universality (i.e., supposed to be perfect). Yet a percentage of them go haywire and become demons. In addition, their Designer seems dissatisfied even with the limitations of the ones that don't rebel, and thereby introduces raw, flexible, and imperfect creatures -- shaped by their environmental encounters -- in some kind of hope that they'll incrementally develop over the ages toward a more satisfactory ideal being.

Author of article Wrote:But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole.

This vaguely equates to the classic "does not compute" cliché of 1960s entertainment concerning robots and blinking-light computers, where such a machine either shuts down or starts smoking and frying internally when confronted with a paradoxical dilemma.

In complex decision-making quandaries, there surely could be young humans (if not also older ones) who, rookie-wise, would either border on or outright do the same. But part of what's different is the threat of consequences (including personal embarrassment, shame, or guilt even if there are no additional witnesses) driving a person to make their own rule of just choosing before time runs out, even if it turns out to be the poorer choice. The ability to act impulsively, even randomly, without spending any time reflecting at all is part of the human toolkit.
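As a minimal sketch only -- the deadline, the scoring function, and the option names below are invented here, not taken from Winfield's experiment -- that "commit before time runs out" rule might look like this in Python:

Code:
import random
import time

def decide(options, score, deadline_s=0.5):
    """Pick the best-scoring option found before the deadline.
    If time expires, commit to something -- even at random --
    rather than dithering until every option is lost."""
    start = time.monotonic()
    best, best_score = None, float("-inf")
    for option in options:
        if time.monotonic() - start > deadline_s:
            break  # out of time: act on whatever has been evaluated
        s = score(option)
        if s > best_score:
            best, best_score = option, s
    return best if best is not None else random.choice(options)

The particular scoring doesn't matter; the point is that the rule guarantees some action within the time budget, which is exactly what the dithering robot lacked.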

Plus, if all the proxy automatons the robot is supposed to rescue are homogeneous in terms of priority or value, then that's a significant factor missing from an actual scenario. For instance, in the real world the mix of a child or senior versus a fit adult in the choices would shorten the decision time, if there's an unavoidable dilemma. Context could enter as well, like where preventing a war or disaster is dependent upon securing the safety of an important or critical individual over the loss of others in a group (highly exaggerated example used to quickly make the point).
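To make that concrete with a hedged sketch (the categories, weights, and function name below are all invented for illustration, not a proposal for real-world triage), a rescue planner that doesn't treat candidates as interchangeable might score them like so:

Code:
# Hypothetical priority weights; a real system would need far richer
# context than a single category label per person.
PRIORITY = {"child": 3.0, "senior": 2.5, "adult": 1.0}

def pick_rescue_target(candidates):
    """candidates: list of (name, category, seconds_until_hazard).
    Prefer higher-priority and more-imminent cases."""
    def urgency(candidate):
        name, category, seconds_left = candidate
        return PRIORITY.get(category, 1.0) / max(seconds_left, 0.1)
    return max(candidates, key=urgency)

# e.g. pick_rescue_target([("A", "adult", 4.0), ("B", "child", 5.0)])
# favors the child despite the slightly longer time margin.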

Humans certainly aren't perfect themselves in terms of having flawless cognition or apprehension of a scene. We don't have omniscient understanding, especially in terms of all the facts and information that aren't immediately, empirically available in a contingent situation. ("Who are these potential victims, what the hell is going on here, is it truly what it seems?").

So designers, legislators, courts of justice, and the public may likewise have to mitigate their high expectations of robot moral performance being equivalent to that of angels. OTOH, a robot can easily sacrifice itself[*] in a high-risk attempt to save a human with far more machine-like reliability than many people could, so "angel standards" shouldn't, in the early going, be utterly discounted as approximately obtainable.

- - - footnote - - -

[*] Which is not to dismiss the past having produced some pretty fine "human robot" myrmidons willing to sacrifice their lives in an instant for a principle or upon any arbitrary whim issued by a leader, especially in the East.
#3
Syne
I Am Mother is a good case for their ethics being too much about numbers, to the exclusion of real moral accountability.
10,000 humans is a fair exchange for any 9,000, even if you have to murder the 9,000.
#4
Leigha
That movie looks good, and received positive reviews. Hmm. I'm going to check it out.
#5
confused2
John Donne: Poems "For whom the bell tolls" [ https://www.gradesaver.com/donne-poems/s...bell-tolls ]

"No man is an island, entire of itself; every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend's or of thine own were: any man's death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bells tolls; it tolls for thee."

Humans are linked by father to son, mother to daughter, father to daughter, mother to son, grandparents to grandchildren, generation to generation.

By the time you have a robot smart enough that it could be part of the human race, it will also (likely) be smart enough to work out that it isn't part of the human race. There are some signs that this could be OK and other signs (Synes?) that we are a voracious species.
#6
stryder
How would you as a human handle an Ethical Zombie situation?

Truth be told, if you are ever involved in a bad accident or catastrophic event, or just observe one, it's likely you'll suffer conflicting feelings about what to do in the time frame after. It's basically "shock", and it can cause people to freeze rather than do anything; it can also have continued ramifications later on in the form of PTSD (constantly reliving how the event could have been handled).

The thing with us humans, however, is that if we can spot the problem we can sometimes come up with a solution. To stop two people (or robots) from falling down a hole, you could run to one while yelling "Stop! Danger!" in the direction of the other (hoping that the sound carries and that whoever is heading toward the hole is alerted). Furthermore, there is the game theory angle: if there's a bystander nearby, you could tell them to warn the other (of course, explaining in a timely fashion what needs to be done, and actually having someone carry it out, isn't always easily managed).
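That "intercept one, warn the other" split can be sketched as a simple assignment rule. Everything below (the names, positions, and reach estimate) is made up for illustration, and the caveat about delegation failing still applies:

Code:
import math

def plan_response(robot_xy, victims, speed_mps=1.5):
    """victims: list of (name, (x, y), seconds_until_fall).
    Physically intercept someone reachable in time; direct a shouted
    warning toward everyone else and hope the sound carries."""
    def eta(victim):
        name, (x, y), seconds_left = victim
        return math.dist(robot_xy, (x, y)) / speed_mps
    reachable = [v for v in victims if eta(v) <= v[2]]
    intercept = min(reachable or victims, key=eta)
    warn = [v[0] for v in victims if v is not intercept]
    return intercept[0], warn

# e.g. plan_response((0, 0), [("A", (3, 0), 4), ("B", (30, 0), 4)])
# -> intercept "A" (reachable in 2 s), warn "B".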
#7
Secular Sanity
(Jul 13, 2019 06:02 PM)stryder Wrote: How would you as a human handle an Ethical Zombie situation?

Zombie ethics? Interesting, Stryder.

ZOMBIE-INFESTED VIRTUAL WORLD REVEALS OUR ETHICAL BLIND SPOTS

Quote:So, the world has been overrun by zombies, and to have any chance of survival, you have to kill people every now and then. Are you comfortable with that?

A new analysis of comments posted on the forum of a popular online game suggests the answer is: Not really.

…In this virtual world (and unlike many others), death is a big deal, in which a character must “start again from scratch.” To avoid this fate, users “can choose to either attack other characters, or team up with them in order to increase their own chances of survival.”

…This means players must regularly make moral decisions, including whether to kill competing characters, or rob them of their supplies (which means they will, in all likelihood, die). An online forum has been established in which users “ask for or receive comments on whether their actions were justified or ethical.”

“Even in the most drastic conditions, survivors appeared reluctant to kill another human for purely utilitarian reasons,” the researchers add. “In contrast, not directly witnessing the death of another human—even if virtual—seemed to abolish this natural inhibition, at least partially.”

So, if people’s online behavior is indicative—and the researchers believe it is—these results paint a mixed portrait of our core ethical principles. While it’s unclear whether its roots are cultural or biological, the command “Thou shalt not kill” seems to be hard-wired into our brains.

On the other hand, the results suggest we are much more comfortable with delayed destruction. Among these players, at least, guilt decreased dramatically if the deadly consequences of their actions only took shape once they had left the scene.

That indeed sounds like the real world.
#8
Leigha
(Jul 13, 2019 06:02 PM)stryder Wrote: How would you as a human handle an Ethical Zombie situation?


I think that's the great question: will robots eventually become capable of understanding their choices? Will they handle ethical dilemmas as we do, or will they simply be limited to just reacting, as they are now?

Time will tell...

