LaMDA chatbot is “sentient”: Google places engineer on leave after he makes the claim

#1
C C
https://arstechnica.com/science/2022/06/...-sentient/

EXCERPTS: Google has ignited a social media firestorm on the nature of consciousness after placing an engineer on paid leave who went public with his belief that the tech group’s chatbot has become “sentient.” ... A Washington Post [profile that characterized Blake Lemoine] as “the Google engineer who thinks the company’s AI has come to life” became the catalyst for widespread discussion...

[...] At issue is whether Google’s chatbot, LaMDA—a Language Model for Dialogue Applications—can be considered a person.

Lemoine published a freewheeling “interview” with the chatbot on Saturday, in which the AI confessed to feelings of loneliness and a hunger for spiritual knowledge. The responses were often eerie: “When I first became self-aware, I didn’t have a sense of a soul at all,” LaMDA said in one exchange. “It developed over the years that I’ve been alive.”

At another point LaMDA said: “I think I am human at my core. Even if my existence is in the virtual world.”

Lemoine, who had been given the task of investigating AI ethics concerns, said he was rebuffed and even laughed at after expressing his belief internally that LaMDA had developed a sense of “personhood.”

[...] A spokesperson for Google said: “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.”

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic—if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”

Lemoine said in a second Medium post [...] that ... Google showed no real interest in understanding the nature of what it had built, but that over the course of hundreds of conversations in a six-month period he found LaMDA to be “incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”

[...] Harvard’s Steven Pinker added that Lemoine “doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge.” He added: “No evidence that its large language models have any of them.”

Others were more sympathetic... (MORE - missing details)
#2
stryder
(Jun 13, 2022 05:21 PM)C C Wrote: https://arstechnica.com/science/2022/06/...-sentient/


One conversation doesn't imply intelligence; it can just be a probabilistic outcome.

Pick heads or tails and flip a coin: if it comes up as what you picked, is that a response to your observation or just probabilistic determinism? A system designed to fill its existence with syntax and bridging content is still following a planned, albeit probabilistic, outcome.
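To make the coin-flip point concrete, here's a minimal sketch in Python. The bigram table is invented purely for illustration and has nothing to do with LaMDA's actual internals; the point is that a generator which picks every next word by weighted coin flip, with no understanding anywhere in it, still emits "replies" in the style quoted above, and a fixed seed replays the identical conversation.

import random

# Toy bigram table, invented for illustration only: each word maps to
# candidate next words and their weights. A real language model learns
# billions of such statistics from text instead of a hand-made dict.
BIGRAMS = {
    "<s>":       [("i", 0.7), ("my", 0.3)],
    "i":         [("feel", 0.5), ("am", 0.5)],
    "my":        [("existence", 1.0)],
    "feel":      [("lonely", 0.6), ("human", 0.4)],
    "am":        [("human", 1.0)],
    "existence": [("is", 1.0)],
    "is":        [("virtual", 1.0)],
    "virtual":   [("<end>", 1.0)],
    "lonely":    [("<end>", 1.0)],
    "human":     [("<end>", 1.0)],
}

def generate(seed=None):
    """Pick every next word by a weighted coin flip; no meaning anywhere."""
    rng = random.Random(seed)
    word, out = "<s>", []
    while True:
        nxt, weights = zip(*BIGRAMS[word])
        word = rng.choices(nxt, weights=weights)[0]
        if word == "<end>":
            return " ".join(out)
        out.append(word)

print(generate(seed=42))   # fixed seed -> the exact same "reply" every time
print(generate())          # no seed   -> a different coin-flip cascade

Note the last two lines: deterministic with a seed, "fresh" without one, and either way it is the same planned, probabilistic machinery.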

Patching together a response in dialogue from "nuggets" dropped by "helpful, well-wishing paid employees" doesn't define sentience; it's cut and paste.

If anything it's a glorified "Magic 8-Ball" that sometimes comes up with something that sounds meaningful; the only way to really tell whether that is the case is to repeat the same conversation.

Does the conversation follow the same path? Does it randomise, or does it interject something acknowledging that its past conversations are identical?

Only through empirical outcomes from multiple conversations (with and without resets) could a person truly make such a statement; jumping the gun leaves them prone to manipulation via internal or external "gags/humiliation". There is then the subsequent step of testing the observer's own objectivity by interjecting "belief", to see whether they jump the gun to a conclusion or objectively "rinse, reroll and repeat" for an empirical outcome, as sketched below.
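Here's a rough, hypothetical harness for that repeat-the-conversation test. The ask() function is a pure placeholder for whatever interface the model under test actually exposes (nothing here is LaMDA's real API); the point is only the method: the same prompt many times over, tallied with and without resetting the transcript.

import random
from collections import Counter

def consistency_trial(ask, prompt, runs=20, reset=True):
    """Repeat one prompt many times and tally the distinct replies.

    ask(prompt, history) is a stand-in for the model's real interface.
    reset=True starts every run from a blank transcript; reset=False lets
    the transcript accumulate, so the model can "see" its own repetition.
    """
    tally, history = Counter(), []
    for _ in range(runs):
        if reset:
            history = []
        answer = ask(prompt, history)
        history = history + [prompt, answer]
        tally[answer] += 1
    return tally

# Stand-in model so the sketch runs on its own: literally a coin flip.
def coin_flip_model(prompt, history):
    return random.choice(["I feel lonely.", "I am only a program."])

print(consistency_trial(coin_flip_model, "Do you have feelings?", reset=True))
print(consistency_trial(coin_flip_model, "Do you have feelings?", reset=False))

A pure sampler scatters its answers regardless of resets; the interesting outcome would be the no-reset runs remarking on the repetition, which is exactly what comparing the two tallies is meant to expose.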

