Why flat-Earthers are a clear & present threat to an AI-powered society

#1
C C
https://thenextweb.com/news/why-flat-ear...ed-society

EXCERPT: . . . There are almost certainly people from every walk of life, in every industry, at every school, in every police precinct, and working for just about every news outlet who believe things about artificial intelligence that are simply not true.

Let’s make a short list of things that are demonstrably untrue that the general public tends to believe:
  • Big tech is making progress mitigating racial bias in AI
  • AI can predict crime
  • AI can tell if you’re gay
  • AI writing/images/paintings/videos/audio can fool humans
  • AI is on the verge of becoming sentient
  • Having a human in the loop mitigates bias
  • AI can determine if a job candidate will be successful
  • AI can determine gender
  • AI can tell what songs/movies/videos/clothes you’ll like
  • Human-level self-driving vehicles exist
And that list could go on and on. There are thousands of useless startups and corporations out there running basic algorithms and claiming their systems can do things that no AI can do.

Those that aren’t outright peddling snake oil often fudge statistics and percentages to mislead people about how effective their products really are. These range from startups claiming they’ll let you speak with your dead loved ones by feeding an AI system all their old texts and building a chatbot that imitates them, all the way to multi-billion-dollar big tech outfits such as PredPol.
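To make that concrete (a toy illustration added here, not taken from the article): on rare events, a “predictor” that never flags anything still posts an impressive-sounding accuracy while catching nothing it was advertised to catch. The numbers below are assumed purely for the sketch, in Python.

[code]
# Toy numbers, assumed purely for illustration: 10 real incidents in 1,000 cases.
positives = 10          # actual incidents
negatives = 990         # non-incidents
flagged_correctly = 0   # a "predictor" that never flags anything catches none

accuracy = (negatives + flagged_correctly) / (positives + negatives)
recall = flagged_correctly / positives if positives else 0.0

print(f"accuracy: {accuracy:.1%}")  # 99.0% -- the number that goes in the pitch deck
print(f"recall:   {recall:.1%}")    # 0.0%  -- the number that describes the product
[/code]

The same arithmetic applies to any rare event, which is why a “90-plus percent accurate” claim says almost nothing on its own about a crime-prediction or gunshot-detection product.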

The people running these companies are either ignorant or disingenuous hucksters who know they’re using the same technology that, for example, IBM’s Watson uses to power the chatbots that pop up when you go to your bank’s website. “How can we help?”

And it’s just as bad in academia. When researchers claim that a text generator or style-imitation algorithm can “fool” people with its text or “paintings,” they’re not consulting expert opinion; they’re asking Mechanical Turk workers what they think.

[...] So, why do entire governments endorse shitty products such as those peddled by the snake oil salespeople at PredPol and ShotSpotter?

Why does Stanford continue to support research from a team that claims it can use AI to tell if a person is gay or liberal?

Why is Tesla’s Full Self-Driving and Autopilot software so popular when it is clearly and demonstrably nowhere near safely self-driving or autopiloting a vehicle?

Why do so many people believe that GPT-3 can fool humans?

Because trillions upon trillions of dollars are at stake and because these systems all provide a clear benefit to their users aside from their advertised use cases.

[...] It’s the same with companies that use hiring software to determine who the best candidates are.

AI can’t tell you who the best candidate is. What it can do is reinforce your existing biases by taking your records on candidates who have traditionally done well and applying them as a filter over potential candidates. Thus, the ultimate purpose of these systems is to empower a business to choose the candidates it wants and, if there are accusations of bias, to let HR blame the algorithm.
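A rough sketch of that mechanism follows, using entirely made-up data and a generic classifier rather than any real vendor’s system: fit a model to past hiring decisions that were skewed toward one group, and its learned weights replay that skew instead of identifying the “best” candidate.

[code]
# Hypothetical sketch: a model fit to biased historical hiring decisions learns
# the bias, not candidate quality. Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
skill = rng.normal(size=n)               # genuinely job-relevant signal
group = rng.integers(0, 2, size=n)       # stand-in for a protected attribute

# Simulated past decisions that favored group 1 largely regardless of skill.
hired = (0.5 * skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("learned weights [skill, group]:", model.coef_.round(2))
# The weight on `group` dwarfs the weight on `skill`: the filter hasn't found
# the best candidates, it has codified whoever was hired before.
[/code]

On real records the same pattern shows up, just less legibly, which is exactly what makes “blame the algorithm” such a convenient defense.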

But there’s a pretty good chance that most people, even many who develop AI technologies themselves, believe at least one of these big lies about what AI can and can’t do. And, to one degree or another, we’re all being exploited because of those misguided beliefs.

If we should be afraid of electing flat-Earthers, or having pro-diseasers in our schools, or cops on the force who believe the US election was rigged by Democrats who literally eat babies, then we should be absolutely terrified about what’s happening in the world of AI.

After all, if the flat-Earth and antivaxx movements are growing exponentially year over year, what possible hope could we have of stalling out the mass-held belief that AI can do all sorts of things that are demonstrably impossible? (MORE - missing details)