Scivillage.com Casual Discussion Science Forum
Lack of political misfits + Are robots still just "tools" when they are used to kill? - Printable Version




Lack of political misfits + Are robots still just "tools" when they are used to kill? - C C - Jul 11, 2016

Lack of political ‘misfits’ may prevent compromise
http://www.futurity.org/political-segregation-1199352-2/

EXCERPT: Living around people with opposing political viewpoints may affect a person’s ability to form close relationships and accept other perspectives, a national study finds. It might also affect personality. The research also could help explain why so many Americans are moving to areas that suit them politically, further segregating the nation into “red” and “blue” states, says William Chopik, an assistant professor of psychology at Michigan State University.

And while living among folks of common ideology may reduce conflict and promote individual well-being, it also could be stifling healthy political discourse, says Chopik [...] “You might be happier if you’re a conservative and you move to a stereotypical conservative place, or a liberal to a liberal one, but maybe that’s one of the reasons we see all the deadlock and polarization along party lines,” Chopik says. “If you never live among people you disagree with, how does compromise happen?” [...] “Obviously, Trump supporters exist, and Clinton supporters exist, but people are choosing an environment where the other side doesn’t exist,” Chopik says...



Are Robots Still Just "Tools" When They Are Used to Kill?
http://www.scientificamerican.com/article/are-robots-still-just-tools-when-they-are-used-to-kill/

EXCERPT: [...] Toby Walsh, a professor of artificial intelligence at the University of New South Wales, cautions against seeing this use of a robot as a nightmarish science-fiction scenario—because the robot was being operated by a human via remote control. “In [that] sense, it was no more taking us down the road to killer robots than the remote-controlled Predator drones flying above the skies of Iraq, Pakistan and elsewhere,” Walsh told Scientific American in an email. “A human was still very much in the loop and this is a good thing.”

Others agree. “The fact of the matter is, [the robot] is a tool. As a tool, these capabilities have existed for years and years,” says Red Whittaker, a robotics professor at Carnegie Mellon University. “It’s remote controlled—it’s not different whatsoever from pulling a trigger or throwing a grenade or whatever the other options might be. Remote control is the kind of thing that you can buy in a hobby shop....”


RE: Lack of political misfits + Are robots still just "tools" when they are used to kill? - Magical Realist - Jul 14, 2016

In other murderous robot news:

"The robot apocalypse has been a long time coming, but I thought we had a few more years left before Skynet took over. Unfortunately, that is not the case.

Just as the First World War was sparked by the assassination of an Archduke, the trigger for the First Robot War may be a cruel and unprovoked attack on a human toddler by a 300-pound robotic security guard.

The attack happened sometime this week at the Stanford Shopping Center in California. A robotic security guard with an eerie resemblance to Dr Who’s supervillain Daleks, called Knightscope, was on patrol. It uses sensors and cameras to patrol areas looking for suspicious behaviour, but ‘failed to spot’ a 16-month-old child, ‘accidentally’ knocked him to the ground, and rolled over him.

“The robot hit my son’s head and he fell down facing down on the floor and the robot did not stop and it kept moving forward,” the boy’s mother said. “He was crying like crazy and he never cries. He seldom cries.”

The assault is being framed as an unfortunate mistake, but ask yourself this: is it more likely that a sensor-laden robot failed to spot a human child, or was this the first tentative engagement as the robots try to test our defenses and capabilities?"===http://bgr.com/2016/07/12/robot-security-guard-accident-child-attack/