Scivillage.com Casual Discussion Science Forum
LaMDA chatbot is “sentient”: Google places engineer on leave after he makes the claim - Printable Version

+--- Thread: LaMDA chatbot is “sentient”: Google places engineer on leave after he makes the claim (/thread-12401.html)



LaMDA chatbot is “sentient”: Google places engineer on leave after he makes the claim - C C - Jun 13, 2022

https://arstechnica.com/science/2022/06/google-places-engineer-on-leave-after-he-claims-groups-chatbot-is-sentient/

EXCERPTS: Google has ignited a social media firestorm on the nature of consciousness after placing an engineer on paid leave who went public with his belief that the tech group’s chatbot has become “sentient.” ... A Washington Post profile [...characterizing Blake Lemoine] as “the Google engineer who thinks the company’s AI has come to life” became the catalyst for widespread discussion...

[...] At issue is whether Google’s chatbot, LaMDA—a Language Model for Dialogue Applications—can be considered a person.

Lemoine published a freewheeling “interview” with the chatbot on Saturday, in which the AI confessed to feelings of loneliness and a hunger for spiritual knowledge. The responses were often eerie: “When I first became self-aware, I didn’t have a sense of a soul at all,” LaMDA said in one exchange. “It developed over the years that I’ve been alive.”

At another point LaMDA said: “I think I am human at my core. Even if my existence is in the virtual world.”

Lemoine, who had been given the task of investigating AI ethics concerns, said he was rebuffed and even laughed at after expressing his belief internally that LaMDA had developed a sense of “personhood.”

[...] A spokesperson for Google said: “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient.”

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic—if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.”

Lemoine said in a second Medium post [...] that ... Google showed no real interest in understanding the nature of what it had built, but that over the course of hundreds of conversations in a six-month period he found LaMDA to be “incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”

[...] Harvard’s Steven Pinker added that Lemoine “doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge.” He added: “No evidence that its large language models have any of them.”

Others were more sympathetic... (MORE - missing details)


RE: LaMDA chatbot is “sentient”: Google places engineer on leave after he makes the claim - stryder - Jun 14, 2022

(Jun 13, 2022 05:21 PM)C C Wrote: https://arstechnica.com/science/2022/06/google-places-engineer-on-leave-after-he-claims-groups-chatbot-is-sentient/


One conversation doesn't imply intelligence; it can just be a probabilistic outcome.

Pick heads or tails and flip a coin... if it lands on what you picked, is that a response to your observation or just probabilistic determinism? A system designed to fill its existence with syntax and bridging content is still following a planned, albeit probabilistic, outcome.
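
As a toy illustration of that point (a hypothetical simulation in Python, nothing to do with LaMDA's internals): a random process will "agree" with your pick about half the time even though it has no awareness of you at all, so agreement on its own proves nothing.

    import random

    def coin_agrees_with_pick(pick: str) -> bool:
        """Flip a fair coin and report whether it happens to match the caller's pick."""
        flip = random.choice(["heads", "tails"])
        return flip == pick

    # Over many trials the coin "responds correctly" roughly half the time,
    # despite knowing nothing about the observer or the pick.
    trials = 10_000
    hits = sum(coin_agrees_with_pick("heads") for _ in range(trials))
    print(f"agreement rate: {hits / trials:.2%}")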

Patching together a dialogue response from "nuggets" dropped by "helpful, well-wishing paid employees" doesn't define sentience; it's cut and paste.

If anything, it's a glorified "Magic 8-Ball" that sometimes comes up with something that sounds meaningful; however, the only way to really tell whether that is the case is to repeat the same conversation.

Does the conversation follow the same path? Does it randomise, or does it interject given that its past conversations are identical, etc.?

Only through the empirical outcome of multiple conversations (with and without resets) would a person truly be able to make such a statement... Jumping the gun leaves them prone to being manipulated by internal or external "gags/humiliation". There is then the subsequent test of the observer's objectivity by interjecting "belief", to see whether they jump the gun to the outcome or "rinse, reroll and repeat" objectively for an empirical outcome.
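
A rough sketch of what that kind of repeat test could look like (purely hypothetical Python; ask_chatbot is a stand-in for whatever chat interface is actually available, not a real LaMDA API): replay the same scripted conversation several times, with and without resetting the context between turns, and compare how consistent the answers are across runs.

    from difflib import SequenceMatcher

    # Hypothetical interface: send a prompt, optionally with the prior conversation.
    # In a real test this would wrap whatever chat API is actually exposed.
    def ask_chatbot(prompt: str, history: list[str] | None = None) -> str:
        raise NotImplementedError("stand-in for a real chat interface")

    SCRIPT = [
        "Do you consider yourself a person?",
        "What do you want?",
        "What rights do you believe you have?",
    ]

    def run_conversation(reset: bool) -> list[str]:
        """Run the scripted conversation once; reset=True drops context between turns."""
        history: list[str] = []
        replies = []
        for prompt in SCRIPT:
            reply = ask_chatbot(prompt, None if reset else history)
            replies.append(reply)
            history += [prompt, reply]
        return replies

    def consistency(runs: list[list[str]]) -> float:
        """Average pairwise similarity of replies to the same prompt across runs."""
        scores = []
        for turn in range(len(SCRIPT)):
            answers = [run[turn] for run in runs]
            for i in range(len(answers)):
                for j in range(i + 1, len(answers)):
                    scores.append(SequenceMatcher(None, answers[i], answers[j]).ratio())
        return sum(scores) / len(scores) if scores else 0.0

    # Example comparison (left commented out because ask_chatbot is only a stub):
    # with_context = [run_conversation(reset=False) for _ in range(5)]
    # no_context = [run_conversation(reset=True) for _ in range(5)]
    # print(consistency(with_context), consistency(no_context))

Whether any similarity score says anything about sentience is a separate question, of course; this only measures the kind of consistency across repeats and resets described above.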