
ChatGPT can’t think – consciousness is something entirely different to today’s AI
https://theconversation.com/chatgpt-cant...-ai-204823
EXCERPTS (Philip Goff): There has been shock around the world at the rapid rate of progress with ChatGPT and other artificial intelligence created with what’s known as large language models (LLMs). But can these systems really think and understand?
[...] It doesn’t consciously understand the meaning of the words it’s spitting out. If “thought” means the act of conscious reflection, then ChatGPT has no thoughts about anything.
[...] How can I be so sure that ChatGPT isn’t conscious? In the 1990s, neuroscientist Christof Koch bet philosopher David Chalmers a case of fine wine that scientists would have entirely pinned down the “neural correlates of consciousness” in 25 years.
By this, he meant they would have identified the forms of brain activity necessary and sufficient for conscious experience. It’s about time Koch paid up, as there is zero consensus that this has happened.
This is because consciousness can’t be observed by looking inside your head. In their attempts to find a connection between brain activity and experience, neuroscientists must rely on their subjects’ testimony, or on external markers of consciousness. But there are multiple ways of interpreting the data.
[...] As I argue in my forthcoming book “Why? The Purpose of the Universe”, consciousness must have evolved because it made a behavioural difference. Systems with consciousness must behave differently, and hence survive better, than systems without consciousness.
If all behaviour was determined by underlying chemistry and physics, natural selection would have no motivation for making organisms conscious; we would have evolved as unfeeling survival mechanisms.
My bet, then, is that as we learn more about the brain’s detailed workings, we will precisely identify which areas of the brain embody consciousness. This is because those regions will exhibit behaviour that can’t be explained by currently known chemistry and physics. Already, some neuroscientists are seeking potential new explanations for consciousness to supplement the basic equations of physics.
[...] While the processing of LLMs is now too complex for us to fully understand, we know that it could in principle be predicted from known physics. On this basis, we can confidently assert that ChatGPT is not conscious... (MORE - missing details)
Chatbots don’t know what stuff isn’t
https://www.quantamagazine.org/ai-like-c...-20230512/
INTRO: Nora Kassner suspected her computer wasn’t as smart as people thought. In October 2018, Google released a language model algorithm called BERT, which Kassner, a researcher in the same field, quickly loaded on her laptop. It was Google’s first language model that was self-taught on a massive volume of online data. Like her peers, Kassner was impressed that BERT could complete users’ sentences and answer simple questions. It seemed as if the large language model (LLM) could read text like a human (or better).
But Kassner, at the time a graduate student at Ludwig Maximilian University of Munich, remained skeptical. She felt LLMs should understand what their answers mean — and what they don’t mean. It’s one thing to know that a bird can fly. “A model should automatically also know that the negated statement — ‘a bird cannot fly’ — is false,” she said. But when she and her adviser, Hinrich Schütze, tested BERT and two other LLMs in 2019, they found that the models behaved as if words like “not” were invisible.
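A minimal sketch of the kind of probe described above, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint; this is an illustration of the idea, not Kassner and Schütze's actual benchmark. It asks a masked language model to complete a statement and its negation, so you can see whether the two prompts produce essentially the same top guesses.

# Illustrative probe: does a masked language model treat "cannot" as if it were invisible?
# Assumes the `transformers` library is installed and bert-base-uncased can be downloaded.

from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for prompt in ["Birds can [MASK].", "Birds cannot [MASK]."]:
    print(prompt)
    for candidate in fill(prompt, top_k=3):
        # Each candidate is a dict with the predicted token and its score.
        print(f"  {candidate['token_str']:>8}  score={candidate['score']:.3f}")

If the model behaves as the researchers reported, the affirmative and negated prompts yield largely overlapping completions (for example, "fly" ranked high in both), which is the failure mode the article goes on to discuss.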
Since then, LLMs have skyrocketed in size and ability. “The algorithm itself is still similar to what we had before. But the scale and the performance is really astonishing,” said Ding Zhao, who leads the Safe Artificial Intelligence Lab at Carnegie Mellon University.
But while chatbots have improved their humanlike performances, they still have trouble with negation. They know what it means if a bird can’t fly, but they collapse when confronted with more complicated logic involving words like “not,” which is trivial to a human.
“Large language models work better than any system we have ever had before,” said Pascale Fung, an AI researcher at the Hong Kong University of Science and Technology. “Why do they struggle with something that’s seemingly simple while it’s demonstrating amazing power in other things that we don’t expect it to?” Recent studies have finally started to explain the difficulties, and what programmers can do to get around them. But researchers still don’t understand whether machines will ever truly know the word “no.” (MORE - details)