
Science has outgrown the human mind + How alien will robot morality be?

#1
C C Offline
Science has outgrown the human mind and its limited capacities
https://www.theatlantic.com/science/arch...ed/524136/

EXCERPT: Science is in the midst of a data crisis. Last year, there were more than 1.2 million new papers published in the biomedical sciences alone, bringing the total number of peer-reviewed biomedical papers to over 26 million. However, the average scientist reads only about 250 papers a year. Meanwhile, the quality of the scientific literature has been in decline. Some recent studies found that the majority of biomedical papers were irreproducible.

The twin challenges of too much quantity and too little quality are rooted in the finite neurological capacity of the human mind. Scientists are deriving hypotheses from a smaller and smaller fraction of our collective knowledge and consequently, more and more, asking the wrong questions, or asking ones that have already been answered. Also, human creativity seems to depend increasingly on the stochasticity of previous experiences – particular life events that allow a researcher to notice something others do not. Although chance has always been a factor in scientific discovery, it is currently playing a much larger role than it should.

One promising strategy to overcome the current crisis is to integrate machines and artificial intelligence in the scientific process. Machines have greater memory and higher computational capacity than the human brain. Automation of the scientific process could greatly increase the rate of discovery. It could even begin another scientific revolution. That huge possibility hinges on an equally huge question: can scientific discovery really be automated? I believe it can, using an approach that we have known about for centuries. The answer to this question can be found in the work of Sir Francis Bacon, the 17th-century English philosopher and a key progenitor of modern science....

MORE: https://www.theatlantic.com/science/arch...ed/524136/
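
(Aside: a quick back-of-envelope calculation makes the excerpt's scale problem concrete. This is a minimal sketch in Python using only the figures quoted above; the variable names are my own, not the article's.)

Code:
# Back-of-envelope: what fraction of one year's new biomedical papers
# can a single scientist actually read? Both figures come from the
# excerpt above (1.2 million new papers, ~250 papers read per year).
new_papers_per_year = 1_200_000
papers_read_per_year = 250

fraction = papers_read_per_year / new_papers_per_year
print(f"One scientist covers about {fraction:.3%} of a year's new output.")
# -> about 0.021% -- and that ignores the ~26 million papers already published.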



Creating robots capable of moral reasoning is like parenting
https://aeon.co/essays/creating-robots-c...-parenting

EXCERPT: Intelligent machines, long promised and never delivered, are finally on the horizon. [...] They will be able to make genuine choices. And if a robot can make choices, there is a real question about whether it will make moral choices. But what is moral for a robot? Is this the same as what’s moral for a human? Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there’s another problem, one that really ought to come first. It’s the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I’d argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us. Can we handle sharing the world with a new type of moral creature?

[...] In fact, we might discover that intelligent machines think about everything [...] in ways that are alien to us. You don't have to imagine some horrible science-fiction scenario, where robots go on a murderous rampage. It might be something more like this: imagine that robots show moral concern for humans, and robots, and most animals… and also sofas. They are very careful not to damage sofas, just as we're careful not to damage babies. We might ask the machines: why are you so worried about sofas? And their explanation might not make sense to us [...]
#2
Carol Offline
Have you seen the British TV series "Humans"? It is about the development of robots that look human and are made to serve humans. Five of them were made by a rogue scientist and have consciousness, and they are a threat because they react to abuse in a human, self-defensive way. The scientist's son dies, and the scientist gives him life again with a computer brain, which ties him to the robots his father created. The series raises many questions. Pure logic doesn't work well for humans. We like to eat what we want to eat, and we can be resistant to taking prescribed medications, to exercising, and to studying the consequences of our behavior and making logical decisions. We seem to like our fantasies and indulgences more than logic.

But what is moral for a robot?

I define morality as a matter of cause and effect, but humans base decisions more on feelings than logic. That means humans are not exactly logical, and therefore not exactly moral. Or, what might feelings have to do with moral decisions? We care about ourselves and others, and this leads to decisions based on feelings. Could this lead to more or less moral decisions? What feelings would a robot have?
#3
Carol Offline
On second thought, if artificial intelligence can give us better and cheaper medicine, or better information for handling our diminishing resources and growing populations, I don't think we can develop it fast enough. It would be nice to have had it yesterday.

Those of us who suffer chronic health problems would love to have better and cheaper medical care. Medical care that is based on our needs, not on someone's profit. Medical care that is based on better information about why we are experiencing a medical problem, instead of being an experiment done in the hope that it will work, one that could even lead to much more severe problems. Drugs based on need, not profit. Yes, yes, let us advance this technology.
#4
C C Offline
I might distinguish the two in this way...

The "moral domain" requires principles / rules to revolve around, as opposed to arbitrary impulses. Standards which pertain to a social group or had their original genesis in considerations of such. Maxims which are universal to that community unless a prescribed code of conduct itself allows / dispenses some circumstantial variations and exceptions for itself. Sometimes an element can be suggested advice or wisdom rather than the weight of strict law with a penalty.

"Feelings" are personal sensitivities that can either reinforce conformity to privately held canon and publicly authorized standards or cause contingent deviation from / violation of those tenets. The unstable nature of feelings, emotions, and desires thereby provide a "wiggle-room" that can disrupt robotic-like adherence to moral frameworks and guidelines.

One end result of the two converging is the demotion of moral affairs from idealized expectations to "borders that one should not stray too far from" or "shorelines which should not be lost to sight by venturing into deep waters". (Beyond this point be not so much dragons awaiting as thou becoming the monster.)

