(Jul 10, 2019 04:29 AM)Leigha Wrote: Do you think that this is a good idea?
Algorithmic learning and embodied interactive learning are yielding some impressive results that surpass the old "have all the programming figured out from the start, provided by the creators" approach.
In a coincidental pseudo-historical parallel, Abrahamic mythology (or cultural metaphoric story) features beings that are ready-made from the start with a built-in, predictable moral universality (i.e., supposed to be perfect). Yet a percentage of them go haywire and become demons. Moreover, their Designer seems dissatisfied even with the limitations of the ones that don't rebel, and thereby introduces raw, flexible, and imperfect creatures -- shaped by their environmental encounters -- in some kind of hope that they'll incrementally develop over the ages toward a more satisfactory ideal being.
Author of article Wrote:But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole.
This vaguely equates to the classic "does not compute" cliché of 1960s entertainment concerning robots and blinking-light computers, where the machine either shuts down or starts smoking and frying internally when confronted with a paradoxical dilemma.
In complex decision-making quandaries, there surely could be young humans (if not older ones too) who, rookie-like, would either border on or outright do the same. But part of what's different is the threat of consequences (including personal embarrassment, shame, or guilt even if there are no additional witnesses) driving a person to adopt their own rule of simply choosing before time runs out, even if it turns out to be the poorer choice. The ability to act impulsively and randomly, without spending any time reflecting at all, is part of the human toolkit.
Plus, if all the proxy automatons the robot is supposed to rescue are homogeneous in terms of priority or value, then that's a significant factor missing from an actual scenario. For instance, in the real world, a mix of a child or senior versus a fit adult among the choices would shorten the decision time in an unavoidable dilemma. Context could enter as well, such as where preventing a war or disaster depends upon securing the safety of an important or critical individual over the loss of others in a group (a highly exaggerated example used to quickly make the point).
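The "just choose before time runs out" rule and the priority mix described above can be sketched as a toy decision loop. Everything here is a hypothetical illustration -- the category names, weights, and time budget are my own assumptions, not anything from the article's actual experiment:

```python
import random

# Hypothetical priority weights; purely illustrative, not from the experiment.
PRIORITY = {"child": 3, "senior": 2, "fit_adult": 1}

def choose_rescue_target(victims, deliberation_budget, elapsed):
    """Pick whom to rescue: weigh priorities while time remains, but
    force an arbitrary choice once the deliberation budget is spent."""
    if not victims:
        return None
    if elapsed >= deliberation_budget:
        # Human-style tie-breaker: an impulsive random pick
        # beats frozen dithering while both victims fall in.
        return random.choice(victims)
    # Otherwise favor the highest-priority victim.
    best = max(victims, key=lambda v: PRIORITY.get(v, 0))
    tied = [v for v in victims if PRIORITY.get(v, 0) == PRIORITY.get(best, 0)]
    if len(tied) == 1:
        return best
    # Equal-priority victims: the dilemma persists -- keep deliberating,
    # which is roughly the failure mode the article describes.
    return None
```

With a heterogeneous mix the choice resolves immediately (`child` over `fit_adult`); with identical victims the function stalls until the timeout forces a pick, mirroring the contrast drawn above.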
Humans certainly aren't perfect themselves in terms of having flawless cognition or apprehension of a scene. We don't have omniscient understanding, especially in terms of all the facts and information that aren't immediately, empirically available in a contingent situation. ("Who are these potential victims, what the hell is going on here, is it truly what it seems?")
So designers, legislators, courts of justice, and the public may likewise have to mitigate their high expectations of robot moral performance being equivalent to that of angels. OTOH, a robot can easily sacrifice itself[*] in a high-risk attempt to save a human with far more machine-like reliability than many people could, so "angel standards" shouldn't, in the early going, be utterly discounted as approximately attainable.
- - - footnote - - -
[*] Which is not to dismiss the past having produced some pretty fine "human robot" myrmidons willing to sacrifice their lives in an instant for a principle or upon any arbitrary whim issued by a leader, especially in the East.