Jul 1, 2023 08:10 PM
Since even a mouse has phenomenal consciousness, I agree with Hossenfelder and Graham that an AI with human-level sapience and beyond would potentially be more dangerous (whether as a rogue intelligence or as an insufficiently intelligent one) than either a "slow-witted" organism with consciousness or a "stupid" philosophical zombie (machine). This might actually be more about IQ testing (thus slotting it under A&P), though you might not guess that from the excerpts.
- - - - - - - - -
Consciousness is overrated! I think Intelligence is more important.
https://youtu.be/T3EHANFeyns
VIDEO EXCERPTS (Sabine Hossenfelder): [...] When it comes to AI, the question of consciousness receives all the attention, but intelligence is much more difficult to define. For example, few of my colleagues would doubt I’m conscious, but opinions on my intelligence diverge.
While there is no agreed upon definition for intelligence, there are two ingredients that I believe most of us would agree are aspects of intelligence. First, there’s the ability to solve a large variety of problems, especially new ones. This requires knowledge transfer, creativity, and the ability to learn from mistakes. Second, there is the ability to think abstractly, to understand concepts and their relations, and deduce new properties. This relies heavily on the use of logic and reasoning. The relation between knowledge and intelligence is especially difficult to untangle.
This is well illustrated by a 1980 thought experiment from the philosopher John Searle, known as the Chinese Room. Searle asks us to imagine we’re in a room with a book that contains instructions for responding to Chinese text. From outside the room, people send in pieces of paper with Chinese characters, because that’s totally a thing people do. We use our book to figure out how to respond to these messages and return an answer.
The person outside might come away thinking there’s someone in the room who understands Chinese, even though that isn’t so. Searle used this thought experiment to argue that a piece of software doesn’t really understand what it’s doing.
[...] while the Chinese Room doesn’t tell us much about understanding, it illustrates nicely the difference between knowledge and intelligence.
In the Chinese Room, we with our instruction book undoubtedly have knowledge. But we’re not intelligent. We can’t solve any new problems. We can’t transfer knowledge. We have no capacity for abstract thought or logical reasoning. We’re just following instructions.
But as long as the person outside is just asking for knowledge, they won’t be able to tell whether we’re intelligent. Ok, so we have a vague intuitive idea for what it means to be intelligent, but can we make this more precise?
[...] as we’ve all learned in the past couple of months, ChatGPT’s knowledge is extremely narrow, even when it comes to verbal intelligence. A stunning example is that while it’s been trained on text, it doesn’t understand the idea of letters. If you ask it to write a paragraph on a particular topic, say, animals, that doesn’t contain, say, the letter “n”, it has no idea what to do, but doesn’t know that it doesn’t know what to do.
It also can’t count the number of letters in a sentence, though it will still answer the question if you ask it to. This tells us three things. First, the issue isn’t that an AI doesn’t know some things but rather that it doesn’t know what it doesn’t know.
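The checks Hossenfelder describes here are trivial to perform programmatically, which sharpens her point: the failure isn’t a lack of computing power, but that the model has no reliable access to the character level of its own output, and no awareness of that gap. A minimal, purely illustrative sketch of the two checks (the example sentence is my own, not from the video):

```python
def contains_letter(text: str, letter: str) -> bool:
    """Check whether the text uses a given letter (case-insensitive)."""
    return letter.lower() in text.lower()

def count_letters(text: str) -> int:
    """Count alphabetic characters, ignoring spaces and punctuation."""
    return sum(1 for ch in text if ch.isalpha())

# A hypothetical model response to "write about animals without the letter 'n'":
paragraph = "Zebras graze calmly at dawn."
print(contains_letter(paragraph, "n"))  # True -- the constraint was violated
print(count_letters(paragraph))         # 23
```

A few lines of code settle what the model cannot: whether it actually met the constraint. That asymmetry is exactly why "search for failures" works as an intelligence probe, at least for now.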
Second, in practice we often measure dumbness rather than intelligence. We look for negatives. We want to know where someone or something fails, and use that to assess its intelligence.
Third, we use our own intelligence to search for these failures, which is an approach that will inevitably, eventually fail.
[...] There are further tests that are specifically designed for AIs, such as variants of the Bongard Problems. [...] In a 2020 study, researchers found that AIs reach 60-70 percent accuracy, whereas humans tend to reach more than 90 percent. So, this one’s still somewhat of a challenge for AIs. Good news for our egos, I guess.
[...] AIs have always been ahead of humans in terms of memory and processing speed, and have recently rapidly caught up on verbal and visual tasks. While they’re still nowhere near humans in terms of general reasoning skills, I think it’s only a matter of time until they get there. And I agree with Paul Graham, we should worry less about consciousness and more about intelligence. Because of the two, intelligence is much more dangerous...