Article  ChatGPT can’t think (consciousness) + Chatbots don’t know what stuff isn’t

ChatGPT can’t think – consciousness is something entirely different to today’s AI
https://theconversation.com/chatgpt-cant...-ai-204823

EXCERPTS (Philip Goff): There has been shock around the world at the rapid rate of progress with ChatGPT and other artificial intelligence created with what’s known as large language models (LLMs). But can these systems really think and understand?

[...] It doesn’t consciously understand the meaning of the words it’s spitting out. If “thought” means the act of conscious reflection, then ChatGPT has no thoughts about anything.

[...] How can I be so sure that ChatGPT isn’t conscious? In the 1990s, neuroscientist Christof Koch bet philosopher David Chalmers a case of fine wine that scientists would have entirely pinned down the “neural correlates of consciousness” in 25 years.

By this, he meant they would have identified the forms of brain activity necessary and sufficient for conscious experience. It’s about time Koch paid up, as there is zero consensus that this has happened.

This is because consciousness can’t be observed by looking inside your head. In their attempts to find a connection between brain activity and experience, neuroscientists must rely on their subjects’ testimony, or on external markers of consciousness. But there are multiple ways of interpreting the data.

[...] As I argue in my forthcoming book “Why? The Purpose of the Universe”, consciousness must have evolved because it made a behavioural difference. Systems with consciousness must behave differently, and hence survive better, than systems without consciousness.

If all behaviour was determined by underlying chemistry and physics, natural selection would have no motivation for making organisms conscious; we would have evolved as unfeeling survival mechanisms.

My bet, then, is that as we learn more about the brain’s detailed workings, we will precisely identify which areas of the brain embody consciousness. This is because those regions will exhibit behaviour that can’t be explained by currently known chemistry and physics. Already, some neuroscientists are seeking potential new explanations for consciousness to supplement the basic equations of physics.

[...] While the processing of LLMs is now too complex for us to fully understand, we know that it could in principle be predicted from known physics. On this basis, we can confidently assert that ChatGPT is not conscious... (MORE - missing details)


Chatbots don’t know what stuff isn’t
https://www.quantamagazine.org/ai-like-c...-20230512/

INTRO: Nora Kassner suspected her computer wasn’t as smart as people thought. In October 2018, Google released a language model algorithm called BERT, which Kassner, a researcher in the same field, quickly loaded on her laptop. It was Google’s first language model that was self-taught on a massive volume of online data. Like her peers, Kassner was impressed that BERT could complete users’ sentences and answer simple questions. It seemed as if the large language model (LLM) could read text like a human (or better).

But Kassner, at the time a graduate student at Ludwig Maximilian University of Munich, remained skeptical. She felt LLMs should understand what their answers mean — and what they don’t mean. It’s one thing to know that a bird can fly. “A model should automatically also know that the negated statement — ‘a bird cannot fly’ — is false,” she said. But when she and her adviser, Hinrich Schütze, tested BERT and two other LLMs in 2019, they found that the models behaved as if words like “not” were invisible.
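To make the kind of test Kassner describes concrete, here is a minimal sketch of a cloze-style negation probe. It is not the researchers' actual experimental code; the library (Hugging Face transformers), the bert-base-uncased checkpoint, and the exact prompts are assumptions chosen only for illustration.

# Minimal sketch (assumed setup): probing a masked language model's
# handling of negation with the Hugging Face "transformers" library.
# Illustrative only -- not the original 2019 experiment.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "A bird can [MASK].",      # affirmative probe
    "A bird cannot [MASK].",   # negated probe -- the top word should ideally change
]

for prompt in prompts:
    predictions = fill(prompt, top_k=3)
    top_words = [p["token_str"] for p in predictions]
    print(f"{prompt} -> {top_words}")

# If the model treats "not"/"cannot" as invisible, both prompts tend to
# return the same completions (e.g. "fly"), illustrating the failure mode
# described in the article.

Running a probe like this on the older masked-language models typically shows near-identical top predictions for the affirmative and negated versions, which is the behaviour Kassner and Schütze reported.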

Since then, LLMs have skyrocketed in size and ability. “The algorithm itself is still similar to what we had before. But the scale and the performance is really astonishing,” said Ding Zhao, who leads the Safe Artificial Intelligence Lab at Carnegie Mellon University.

But while chatbots have improved their humanlike performances, they still have trouble with negation. They know what it means if a bird can’t fly, but they collapse when confronted with more complicated logic involving words like “not,” which is trivial to a human.

“Large language models work better than any system we have ever had before,” said Pascale Fung, an AI researcher at the Hong Kong University of Science and Technology. “Why do they struggle with something that’s seemingly simple while it’s demonstrating amazing power in other things that we don’t expect it to?” Recent studies have finally started to explain the difficulties, and what programmers can do to get around them. But researchers still don’t understand whether machines will ever truly know the word “no.” (MORE - details)

