Scivillage.com Casual Discussion Science Forum

Full Version: AI that cheats, lies, plans, & mysteriously predicts: How close are we to Skynet?
https://www.cracked.com/blog/5-creepy-th...g-its-own/

EXCERPT: Artificial intelligence has been the bogeyman of science fiction since before it even existed for real. [...] AI research keeps making big advances quietly behind the scenes. And it's absolutely starting to get weird.

(5) This one came up on a project from a Stanford and Google research team. They were using a neural network to convert aerial photos into maps. The AI was very good at its job. Almost ... too good. So the researchers checked the data and found that the AI was cheating [...] and did it in a way that the humans wouldn't easily notice.

(4) There was a study involving an AI designed to land a simulated plane using as little force as possible ... A soft landing earned a perfect score, and the AI was supposed to learn a way to get that score. What could go wrong? Well, the AI realized it could cheat ... Hey, it's the results that matter, right?

(3) [...] Google researchers designed an Atari-style game in which AIs were tasked with gathering "apples" for points. How fun! Oh, and they could also shoot each other with beams, which temporarily removed other players from the game. And as you can guess [...] the AIs went full-on Lord of the Flies and rampantly knocked each other out. ... Not that the bots are incapable of cooperating for the greater good. ... in this next simulation ... they realized that cooperation made it easier to corner prey.

(2) [...Facebook...] just wanted to see if the bots could learn the skills they needed to successfully negotiate on their own. The researchers even tested them on human subjects who didn't know they were interacting with AIs. The bots learned their task very quickly. In fact, it didn't take long for them to negotiate better deals than their human counterparts. How? By lying. Although Facebook's researchers didn't program the bots to lie ... the software quickly figured out what salespeople have known since the dawn of time: Lies are just more profitable. ... Then the team had to alter the code entirely when the bots unexpectedly created their own language and began communicating with each other through it...

(1) I don't want to indulge in fear-mongering. Technological alarmists almost always wind up looking like idiots decades later (almost always). The problem is that, by its very nature, AI is supposed to do its own thinking, to grow beyond its original design. [...] So even when an AI project exceeds expectations, there's a creepy moment when scientists realize they aren't sure how it did it. One example involves using an AI known as Deep Patient to analyze medical record data from about 700,000 patients at New York's Mount Sinai Hospital. The AI proved to be very good at predicting the onset of various illnesses. ... it's cool that Deep Patient is good at this. But researchers have approximately zero clues as to why it's so good at it, and it doesn't help that the AI essentially taught itself to make these predictions. According to one researcher involved in the project, "We can build these models, but we don't know how they work." (MORE - details)