Jul 6, 2023 06:27 PM
(1) As showcased in the 1st article below: to avoid becoming more of a mentally lazy dimwit than I already am. That condition will be fully realized by a later generation unable to meaningfully think, create, or critically examine anything on its own. (Crudely equivalent to the Eloi, but with the Morlocks replaced by non-cannibalistic robots that do occupy the surface.)
(2) As showcased in the 2nd article below (ELIZA), to avoid becoming one of the zombie-ish groupies of an AI worship cult.
(3) Additional disparagement of humankind continued at the very bottom.
- - - - - - - - -
[1] AI is an existential threat – just not the way you think
https://theconversation.com/ai-is-an-exi...ink-207680
EXCERPTS: AI is simply nowhere near gaining the ability to do this kind of damage. The paper clip scenario and others like it are science fiction. Existing AI applications execute specific tasks rather than making broad judgments. The technology is far from being able to decide on and then plan out the goals and subordinate goals necessary for shutting down traffic in order to get you a seat in a restaurant, or blowing up a car factory in order to satisfy your itch for paper clips.
Not only does the technology lack the complicated capacity for multilayer judgment that’s involved in these scenarios, it also does not have autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.
Actually, there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.
For example, humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and during leisure time about whom to hire, who should get a loan, what to watch and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.
Or consider the role of chance in people’s lives. Humans value serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and retrospectively appreciating the role accident played in these meaningful finds. But the role of algorithmic recommendation engines is to reduce that kind of serendipity and replace it with planning and prediction.
Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.
Not dead but diminished
So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.
[...] Recall T.S. Eliot’s famous closing lines of “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.” (MORE - missing details)
[2] ELIZA was one of the first chatbots, built back in the 1960s. Given that humans have conferred with stone and wooden idols in the past, and have treated almost any trendy fashion as a quasi-deity, it's no surprise that people were easily beguiled by even the most primitive language processor back then. Humans project fantastic exaggerations onto anything, from an upright bear interpreted as Bigfoot to lighted drone formations taken for visitors from outer space.
- - - - - - - - - - -
'Please tell me your problem': Remembering ELIZA, the pioneering '60s Chatbot
https://www.mentalfloss.com/posts/eliza-chatbot-history
EXCERPT: Weizenbaum debuted ELIZA in 1966. He invited MIT students as well as colleagues to interact with the program. Messages were sent to the mainframe computer under a time-share system [PDF], which allowed the hardware to host multiple users at once. The statements were then analyzed by ELIZA and sent back to an electric typewriter and printer. Words like girlfriend, depressed, what, mother, and father could all elicit responses. If ELIZA was at a loss, it could fall back on Please go on or That’s very interesting or I see, much like a disinterested human conversation partner. Users had to be careful not to use a question mark, which would be interpreted as a line delete request.
A sample exchange went like this:
User: Well my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
User: He says I’m depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
User: It’s true. I am unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
While laborious by today’s standards of instant text messaging, at the time it was a tantalizing flirtation with machine intelligence. But Weizenbaum wasn’t prepared for the consequences.
Time and again, those testing ELIZA grew so comfortable with the machine and its rote therapist-speak that they began to use the program as a kind of confessional. Personal problems were shared for ELIZA’s advice—really, the program’s ability to listen without judgment.
Weizenbaum took care to explain it was just a program, that no human was on the other end of the line. It didn’t matter. People imbued ELIZA with the very human trait of sympathy.
This observation might have pleased ELIZA’s inventor, save for the fact that he was troubled by a person’s willingness to conflate a program with actual human relationships. Having escaped the tyrannical rule of Nazi Germany, he was perhaps specially attuned to the dangers of reducing the human factor in society.
As a result, ELIZA became something of a sore point for Weizenbaum, who shifted his attention toward assembling critiques of ushering out human thought too quickly and giving too much credence to the illusion of intelligence... (MORE - missing details)
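For flavor, here is a minimal Python sketch of the keyword-and-fallback mechanism the excerpt describes. The rules, pronoun table, and phrasing below are invented for illustration; Weizenbaum's actual program ran on an MIT mainframe and used far richer decomposition/reassembly scripts than this.

```python
import random
import re

# Toy keyword rules: a pattern plus a response template. "{0}", "{1}"
# are filled with the pronoun-reflected captured fragments. These rules
# are hypothetical stand-ins, not Weizenbaum's original script.
RULES = [
    (r"\bmy (boyfriend|girlfriend) (.+)", "Your {0} {1}?"),
    (r"\b(?:i am|i'm) (depressed|unhappy|sad)", "I am sorry to hear you are {0}."),
    (r"\b(mother|father)\b", "Tell me more about your {0}."),
]

# Canned responses for when no keyword fires, as the article describes.
FALLBACKS = ["Please go on.", "That's very interesting.", "I see."]

# Swap first person to second person so echoed fragments read naturally.
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    """Reflect pronouns word by word, e.g. 'made me come' -> 'made you come'."""
    return " ".join(PRONOUN_SWAPS.get(word.lower(), word)
                    for word in fragment.split())

def respond(statement: str) -> str:
    """Answer with the first matching keyword rule, else a stock fallback."""
    for pattern, template in RULES:
        match = re.search(pattern, statement, re.IGNORECASE)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return random.choice(FALLBACKS)

print(respond("Well my boyfriend made me come here"))
# -> Your boyfriend made you come here?
print(respond("He says I'm depressed much of the time"))
# -> I am sorry to hear you are depressed.
print(respond("The weather was nice today"))
# -> Please go on. / That's very interesting. / I see.
```

Even this crude version reproduces the first two lines of the sample exchange above, which hints at why users projected sympathy onto what is, mechanically, pattern matching plus a pronoun swap.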
- - - additional - - -
[3] Ironically, it may be whatever segment of Neo-Luddite culture (a minority) that lingers in a more primitive lifestyle that still retains autonomous human mental functioning. Especially so if many of them amount to obstinate "traditional belief" proles in terms of socioeconomic class.
Whereas the higher-educated class itself (the intelligentsia that has deemed itself the "guiding shepherd" of the "oppressed, brute/grunting" proles since the late 19th century) will perversely deteriorate into a far less sapient population, gradually becoming the clergy-like lackeys of AI -- proselytizing on its behalf, following its commandments, and enforcing them. (We acknowledge that intellectuals have always been impaired with respect to many aspects of practical, everyday reality, but what is forecast here is a pervasive vacuousness in all areas -- apart from expertise in the rote and dogma of their new object of reverence.)