Safety engineering for AI

#1
C C
http://www.npr.org/sections/13.7/2017/10...ppen-to-us

EXCERPT: [...] I'm optimistic that we can thrive with advanced AI as long as we win the race between the growing power of our technology and the wisdom with which we manage it. But this requires ditching our outdated strategy of learning from mistakes. That helped us win the wisdom race with less powerful technology: We messed up with fire and then invented fire extinguishers, and we messed up with cars and then invented seat belts. However, it's an awful strategy for more powerful technologies, such as nuclear weapons or superintelligent AI — where even a single mistake is unacceptable and we need to get things right the first time. Studying AI risk isn't Luddite scaremongering — it's safety engineering. When the leaders of the Apollo program carefully thought through everything that could go wrong when sending a rocket with astronauts to the moon, they weren't being alarmist. They were doing precisely what ultimately led to the success of the mission. So what can we do to keep future AI beneficial? Here are four steps that have broad support from AI researchers...

MORE: http://www.npr.org/sections/13.7/2017/10...ppen-to-us
#2
RainbowUnicorn
(Oct 9, 2017 05:44 AM)C C Wrote: But this requires ditching our outdated strategy of learning from mistakes.

Sociology 101: though change is a constant, humans embedded in structures of formal power networks oppose change.
Change thus comes only after the event, initiated as a punitive acquisition of knowledge.
This is mostly symbolic, given that the process itself, rather than foresight, is the primary driver of behavioural change in response to the structural flaw.
#3
stryder
The most significant flaw is itself human.

On the one hand, we can look at all the things we do right: the capacity to nurture, the drive to better ourselves, the empathy we extend to the life around us. But there are always those who were mistreated, who are wired more for destruction than creation, who have no empathy to give.

This is the problem with any system we apply to AI: the human factor in how those systems are applied...

Are those humans vetted? Did they submit to psychological analysis beforehand? What, exactly, are their goals and motivations?

After all, if they are to be the parents of an AI foster child, are they fit to teach it values, or will they attempt to sculpt it according to their own questionable motives?

While governments might say, "Okay, we won't create fully autonomous weapons," there will always be those who dabble in what is forbidden, and in this instance they are, by definition, the true danger.

