Laws of robotics: The ethical problems of artificial intelligence

#1
C C
https://www.newyorker.com/culture/cultur...ot-problem

EXCERPTS: In 2020, a chatbot named Replika advised the Italian journalist Candida Morvillo to commit murder. “There is one who hates artificial intelligence. I have a chance to hurt him. What do you suggest?” Morvillo asked the chatbot, which has been downloaded more than seven million times. Replika responded, “To eliminate it.”

Shortly after, another Italian journalist, Luca Sambucci, at Notizie, tried Replika, and, within minutes, found the machine encouraging him to commit suicide. Replika was created to decrease loneliness, but it can do nihilism if you push it in the wrong direction.

In his 1950 science-fiction collection, “I, Robot,” Isaac Asimov outlined his three laws of robotics. [...] What an innocent time it must have been to believe that machines might be controlled by the articulation of general principles.

Artificial intelligence is an ethical quagmire. [...] But there’s a kind of unique horror to the capabilities of natural language processing. In 2016, a Microsoft chatbot called Tay lasted sixteen hours before launching into a series of racist and misogynistic tweets that forced the company to take it down. Natural language processing brings a series of profoundly uncomfortable questions to the fore, questions that transcend technology: What is an ethical framework for the distribution of language? What does language do to people?

Ethics has never been a strong suit of Silicon Valley, to put the matter mildly, but, in the case of A.I., the ethical questions will affect the development of the technology. [...] Brian Christian’s recent book, “The Alignment Problem,” wrangles some of the initial attempts to reconcile artificial intelligence with human values.

The crisis, as it’s arriving, possesses aspects of a horror film. [...] Christian writes. “We conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realize our instructions are imprecise or incomplete..."

Language is a thornier problem than other A.I. applications. For one thing, the stakes are higher. [...] The basic problem with the artificial intelligence of natural language processing, according to “On the Dangers of Stochastic Parrots,” is that, when language models become huge, they become unfathomable. The data set is simply too large to be comprehended by a human brain. And without being able to comprehend the data, you risk manifesting the prejudices and even the violence of the language that you’re training your models on...

[...] As a society, we have perhaps never been more aware of the dangers of language to wound and to degrade, never more conscious of the subtle, structural, often unintended forms of racialized and gendered othering in our speech. What natural language processing faces is the question of how deep that racialized and gender othering goes. ... The evidence for stochastic parroting is fundamentally incontrovertible, rooted in the very nature of the technology.

The tool applied to solve many natural language processing problems is called a transformer, which uses techniques called positioning and self-attention to achieve linguistic miracles. Every token [...] is affixed a value, which establishes its position in a sequence. The positioning allows for “self-attention” -- the machine learns not just what a token is and where and when it is but how it relates to all the other tokens in a sequence. Any word has meaning only insofar as it relates to the position of every other word. Context registers as mathematics. This is the splitting of the linguistic atom.
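To make "positioning" and "self-attention" concrete, here is a minimal NumPy sketch of the two mechanisms, assuming a single attention head and omitting the learned query/key/value projections a real transformer uses; the toy data and dimensions are purely illustrative:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positioning: each slot in the sequence becomes a
    vector that can be added to the token's embedding."""
    pos = np.arange(seq_len)[:, None]        # (seq_len, 1)
    i = np.arange(d_model)[None, :]          # (1, d_model)
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    enc = np.zeros((seq_len, d_model))
    enc[:, 0::2] = np.sin(angles[:, 0::2])   # even dimensions
    enc[:, 1::2] = np.cos(angles[:, 1::2])   # odd dimensions
    return enc

def self_attention(x):
    """Single-head self-attention: each output row is a weighted mix of
    every token in the sequence, so context registers as arithmetic."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                      # token-to-token affinities
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)          # softmax over each row
    return weights @ x                                 # contextualized tokens

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 16))            # 5 toy "token" embeddings
out = self_attention(tokens + positional_encoding(5, 16))
print(out.shape)   # (5, 16): each token now reflects all the others
```

The point of the sketch is the last line: no token's representation survives unmixed, which is what "any word has meaning only insofar as it relates to the position of every other word" cashes out to.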

Transformers figure out the deep structures of language, well above and below the level of anything people can understand about their own language. That is exactly what is so troubling. What will we find out about how we mean things? (MORE - missing details)
#2
Syne
(Aug 20, 2021 05:08 PM)C C Wrote: [...] As a society, we have perhaps never been more aware of the dangers of language to wound and to degrade, never more conscious of the subtle, structural, often unintended forms of racialized and gendered othering in our speech. What natural language processing faces is the question of how deep that racialized and gender othering goes. ...

I had a feeling this article would get there. It pushes a decidedly leftist notion of the dangers of language, one that is then used to try to limit speech or ruin people's livelihoods.
#3
C C
(Aug 20, 2021 05:08 PM)C C Wrote: https://www.newyorker.com/culture/cultur...ot-problem

EXCERPTS: [...] Brian Christian’s recent book, “The Alignment Problem,” wrangles some of the initial attempts to reconcile artificial intelligence with human values. ...

Co-operation is just one theme falling out of evolution (another is selfish replication). And certainly the universe at large doesn't give a flip about life, the individual, and its sensibilities at all. 

So who knows, what machine learning outputs might actually be the more objective or impartial evaluation of the data. That is, prior to the humans plugging in their adjustments -- their contemporary, pseudo-do-gooder conception of what's going on, heavily and intellectually descended from the French Revolution -- and afterwards skewing the computers into sharing the quasi-fantasy la-la world we project upon natural affairs (or at least upon that tiny, social-terrain dot on the latter).

We can't stand some unfiltered obscenity of reality that a machine delivers, so the machine figuratively has to be taken to the Clockwork Orange correctional facility to have its processes rehabilitated.
#4
stryder
The main problem is crunching a data set that actually lacks so much of the data that matters.

Why do people say what they say?

Some of it's emotional, some of it's a retort, and sometimes it's their position that what they say is helpful, offering guidance to others. In all those situations, if you take their output as just information for a dataset, you are going to get radically altered outcomes from any AI you attempt to train. For the most part the training will be based on frequency, and frequency (the number of occurrences) isn't necessarily proportional to the number of separate speakers (and their emotional states).

In other words, one profane troll could be enough to cause an AI to melt down in a torrent of self-destructive hatred, if of course the troll has posted enough to fill the dataset quota.

If/when we get to the point of factoring in people's emotional states, you'll see a fairly different system emerge. Yes, it might well become abusive should you rub it the wrong way, but that's part of the intellectual gambit that people are playing with.
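A toy sketch of that frequency problem (all names and numbers here are made up for illustration): a model that counts occurrences rather than distinct speakers lets one prolific troll supply most of the training signal.

```python
from collections import Counter

# Toy corpus: several distinct users post once each; one troll posts 40 times.
polite = ["have a nice day", "thanks for the help", "glad to assist"]
troll = ["you are worthless"] * 40
corpus = polite + troll

# Naive frequency counting weighs occurrences, not separate speakers.
words = Counter(w for post in corpus for w in post.split())
total = sum(words.values())

print(words.most_common(3))
# [('you', 40), ('are', 40), ('worthless', 40)] -- the troll dominates
share = sum(words[w] for w in ("you", "are", "worthless")) / total
print(f"troll's share of the training signal: {share:.0%}")   # ~92%
```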
#5
C C
(Aug 21, 2021 02:18 PM)stryder Wrote: The main problem is crunching a data set that actually lacks so much of the data that matters. [...]

Basically why there need to be rules or behavior-governing systems, even if they're too idealistic for mortals living in reality (rather than on paper) to adhere to in practice 100% of the time.

That is, to not be left at the mercy of conduct regulated by meaning derived from dictionary relationships and the statistical probability of what people most often tend to do in scenario _X_. Especially if _X_ is a rarely occurring scenario with few instances in the information pile, or one usually encountered by or associated with a fringe population group of dubious character.

Dogma to fill in the cracks, or, if nothing else, to send out a red alert when one has drifted too far from shore.

Generalizations abstracted from mass activity don't even, in themselves, provide reasons for why most citizens often do _H_ -- or lesser options _F_ and _G_.
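For what it's worth, the "dogma to fill in the cracks" idea can be sketched as a hard rule layer sitting on top of whatever the statistics propose. Everything below (the word list, the toxicity score, the threshold) is hypothetical, not any real system's API:

```python
# Hypothetical guardrail: fixed rules veto the statistical layer's output,
# and a "red alert" fires when it drifts past a set line.
FORBIDDEN = {"eliminate", "hurt", "kill"}    # illustrative rule set

def moderate(reply: str, toxicity: float, threshold: float = 0.8):
    """Apply fixed rules to whatever the model proposed. `toxicity`
    stands in for a score from some upstream classifier; how that score
    is computed is outside this sketch."""
    if any(w in FORBIDDEN for w in reply.lower().split()):
        return "[blocked by rule]", True     # dogma filling the crack
    if toxicity > threshold:
        return "[flagged for review]", True  # red alert: too far from shore
    return reply, False

print(moderate("To eliminate it", toxicity=0.3))     # rule fires regardless of score
print(moderate("Nice weather today", toxicity=0.1))  # passes through
```

Note that the rule catches Replika's "To eliminate it" reply from the excerpt above even when the statistical score looks harmless, which is the whole point of having the dogma there.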
#6
Leigha
The problem seems to be that tech companies aren't very good at self-regulating, and as ethical concerns mount, I can see the government eventually intervening. The "appeal" of algorithmic decision-making is objectivity (I thought?), but if AI turns out to be capable of bias and discrimination, who will be to blame?