(Nov 26, 2017 04:17 AM)Leigha Wrote: ''Sophia’s creator David Hanson says the 19-month-old robot, which was awarded Saudi citizenship last month, could achieve consciousness within the next few years.''
This Hanson guy is just jerking chains in hopes that idiot journalists will bark.
Quote: How can AI achieve consciousness?
In order to answer that, one would have to have already satisfactorily defined 'consciousness'.
In biology, it typically means something like 'awareness at the organismic level'. An animal will typically be said to be conscious of the proximity of food if it detects the food's presence, correctly identifies it as food, and then behaves appropriately. If all those things happen, then a biologist will feel justified in saying that the animal in question displayed awareness of the food. (Where 'aware of' and 'conscious of' are pretty much synonyms.) Biologists are essentially behaviorists.
I don't think that biologists are particularly worried about all the phenomenological stuff that so fascinates anti-physicalist philosophers of mind. (Neither am I, actually.)
As for me, I think that consciousness is responsiveness to the environment
and on the most basic level that reduces to causation. (Hit a billiard ball with another billiard ball and it responds by changing velocity. That's how consciousness originates in a physical world.) You can put a bunch of motile protozoa like
Paramecium on a microscope slide, release an eye-dropper drop of noxious chemical to one side of them, and they will all start swimming in the opposite direction. (That's a laboratory exercise every first-year biology student does.) These are single-celled organisms, but they still detect dangerous conditions in their environment and respond appropriately. I doubt there was anything more involved there than a causal chain.
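To make the "nothing but a causal chain" point concrete, here's a toy sketch (my own illustration, not a biological model and not anything from the quoted post): a simulated cell on a one-dimensional line that "avoids" a noxious chemical purely by reacting to the local gradient. There's no memory and no internal model; each move is directly caused by what it senses, yet the behavior looks like avoidance.

```python
# Toy stimulus-response sketch: a "paramecium" on a 1-D line avoiding a
# chemical drop at position 0. All names and numbers are illustrative.

def concentration(x, source=0.0):
    """Chemical concentration falls off with distance from the drop."""
    return 1.0 / (1.0 + abs(x - source))

def step(x, delta=0.1):
    """Move one step in whichever direction lowers the sensed concentration.
    No memory, no model: each move is caused directly by the local gradient."""
    left = concentration(x - delta)
    right = concentration(x + delta)
    return x - delta if left < right else x + delta

# A cell starting near the drop ends up "swimming" steadily away from it.
x = 0.5
for _ in range(50):
    x = step(x)
print(x)  # well away from the source at 0.0
```

That's the whole mechanism, and it is exactly the kind of minimal "awareness" I have in mind: detection plus an appropriate caused response, with nothing phenomenological needed to explain it.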
And you can follow it up the phylogenetic tree of life, observing behavior in organism after organism as their sensory apparatus elaborates and their ability to discriminate and their range of possible responses grow. You start out with simple microscopic worms like
C. elegans. (No eyes, but chemo- and touch receptors along with roughly 300 neurons. They nevertheless have a rudimentary ability to learn, and they display feeding and mating behaviors.) Eventually you end up with human beings. (There are fascinating side-branches like the cephalopods, invertebrates very distant from tetrapod chordate mammals in evolutionary terms, which seem to have evolved a sort of conscious intelligence independently. Alien intelligences right here on Earth.)
Robots can already display awareness of their environment in the kind of way I just suggested. We've all seen videos of the robot Atlas picking up boxes, so it must be able to detect and identify boxes and know what to do with them. We've seen it walking through the woods, recognizing and avoiding obstructions like trees.
So I'd say that robots are already conscious in a minimal sense, like the very simple worm. What they seemingly lack are two different things: intelligence and self-awareness.
What seems to me to separate human beings from other animals is that we seem to approach being
general cognizers. We aren't as specialized as the lower animals are. We can seemingly think about any subject. (
But if that's not true, how would we ever know? We couldn't even conceive of whatever we can't think about.) An industrial robot that welds Toyotas is only able to process certain kinds of data relating to the position of its arm and the location of the parts it is supposed to weld. Human beings think about how they hate their boss, where they want to go to lunch, about the status of their love lives, about the possibility of AI and about what it means to be conscious.
I'm not sure what enabled mankind to make that leap, but I speculate that it was our adoption of language. Once we started thinking in terms not only of sensory data but of words, and hence abstract ideas, the scope of what we could think about grew exponentially. We not only could think about our surrounding physical environment in a whole new (mythic and/or scientific) way, in terms of generalities, universals and abstractions,
we could think about ourselves and our own inner states and processes in the same new conceptual way. We acquired the ability to
model external reality and ourselves and in so doing acquired a 'self' and some ability to even think about what it's
'like to be me' (and you as well). Combined with our social instincts, we developed an implicit and mostly unconscious philosophy of mind, an ability to attribute mental states, awareness, motives and purposes not only to the behavior of our fellow humans, but also to ourselves. (Actually I think that other mammals can already do this to some extent. My dog certainly can intuit my moods and some of my more obvious desires. But the ability to conceptualize mental states put that preexisting ability on a whole new level.)
So to answer the question in the OP, I don't expect robots to become conscious in a human sense until they become general-purpose rather than specialized to particular tasks, until they are able to process language as we do, and until they display some ability to think creatively in terms of generalizations, universals and concepts and to guide their own thinking as they do it.
Bottom line: I do expect that intelligent conscious AIs are possible, but I think that they are a lot further off than many of the louder pop-futurism voices today think.