
How did fundamental consciousness arise from basic chemistry?

#11
Ostronomos
(Jul 23, 2018 07:50 PM)Zinjanthropos Wrote: Still sounds like the UC is no different from how we progress without it. To suggest humans are at a certain level of UC makes me think that higher intelligences won't be bothering with us when we connect. The universe could be packed with intelligences higher than ours, and their dissociation from us could be the case sans UC anyway.

For a human being, does "altered state" include infant, child, pre-adolescent, adolescent, juvenile, young adult, and mature adult minds? And neurotic, diseased, retarded, damaged, traumatized minds... are they not altered mind states?

It seems that the term "altered state of consciousness" covers a broad range of minds, so perhaps they would fit the description, although I was not aware of that before. When I think of an altered state, I think of an improved version of consciousness.

When the UC is present, it is due to the individual observer. When the individual observer generates or connects to the UC, there is a certain sense (a sixth sense) in which the observer is aware that reality has now taken on a predominantly mental character: an awareness that the world is now metaphysical or "supernatural". This need for metaphysics is part of the universe's internal proxy settings, which must be set to the very highest limits to accommodate the individual observer's perception of reality. In some cases this is a matter of life and death, and when one walks that fine line between the two, one's value is sometimes tested, as the universe requires.
#12
Yazata
(Jul 23, 2018 02:06 PM)Ostronomos Wrote: The elements of life originated from stars in outer space: hydrogen, helium, and carbon. It is quite the conundrum to try to understand how consciousness arose from the chemistry of these basic elements. The only reasonable thing we can deduce is that consciousness is universal.

The "only reasonable thing"?? I couldn't disagree more.

I already addressed the question in the subject line, "How did fundamental consciousness arise from basic chemistry?" in an earlier thread about AI:

https://www.scivillage.com/thread-4551-post-16095.html

Here's what I said then --

"In order to answer that, one would have to have already satisfactorily defined 'consciousness'.

In biology, it typically means something like 'awareness at the organismic level'. An animal will typically be said to be conscious of the proximity of food if it detects the food's presence, correctly identifies it as food, and then behaves appropriately. If all those things happen, then a biologist will feel justified in saying that the animal in question displayed awareness of the food. (Where 'aware of' and 'conscious of' are pretty much synonyms.) Biologists are essentially behaviorists.
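If you wanted to state that behavioral criterion baldly, a toy sketch might look like this (the function name and its inputs are my own invented stand-ins for what an observer would record, nothing more):

[code]
# Purely illustrative: the biologist's behavioral criterion as a predicate.
# The three booleans stand in for an observer's records of the animal.

def displayed_awareness_of_food(detected: bool,
                                identified_as_food: bool,
                                behaved_appropriately: bool) -> bool:
    """'Conscious of the food' in the behaviorist sense: nothing over
    and above detection, identification, and appropriate behavior."""
    return detected and identified_as_food and behaved_appropriately

# e.g. a rat smells a pellet, treats it as food, and approaches it:
print(displayed_awareness_of_food(True, True, True))  # True
[/code]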

I don't think that biologists are particularly worried about all the phenomenological stuff that so fascinates anti-physicalist philosophers of mind. (Neither am I, actually.)

As for me, I think that consciousness is responsiveness to the environment, and at the most basic level that reduces to causation. (Hit a billiard ball with another billiard ball and it responds by changing velocity. That's how consciousness originates in a physical world.) You can put a bunch of motile protozoa like Paramecium on a microscope slide, release a drop of some noxious chemical from an eye-dropper to one side of them, and they will all start swimming in the opposite direction. (That's a laboratory exercise every first-year biology student does.) These are single-celled organisms, but they still detect dangerous conditions in their environment and respond appropriately. I doubt that anything more is involved there than a causal chain.
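Just to make the "causal chain" point concrete, here's a toy simulation of that Paramecium exercise. It isn't a biological model; the gradient, step size, and numbers are all invented for illustration:

[code]
# Toy sketch: Paramecium-like avoidance as a pure stimulus-response chain.
# No memory, no internal model -- the next state is a fixed function of
# the current stimulus. All quantities here are made up for illustration.

def noxious_concentration(x: float) -> float:
    """Chemical concentration along a 1-D slide; the drop sits at x = 0."""
    return 1.0 / (1.0 + x * x)

def step(position: float, speed: float = 0.1) -> float:
    """One behavioral update: sample the local gradient, swim away from it."""
    here = noxious_concentration(position)
    ahead = noxious_concentration(position + 0.01)
    gradient = (ahead - here) / 0.01
    # Swim opposite to increasing concentration (simple chemotaxis).
    return position - speed * (1.0 if gradient > 0 else -1.0)

cell = 0.5  # start near the drop
for _ in range(20):
    cell = step(cell)
print(f"final position: {cell:.2f}")  # the 'cell' ends up far from x = 0
[/code]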

And you can follow it up the phylogenetic tree of life, observing behavior in organism after organism as their sensory apparatus elaborates and their ability to discriminate and range of possible responses grow. You start out with simple microscopic worms like C. elegans. (No eyes, but chemo- and touch receptors along with roughly 300 neurons. They nevertheless have a rudimentary ability to learn and display feeding and mating behaviors.) Eventually you end up with human beings. (There are fascinating side-branches like the cephalopods, invertebrates very distant from tetrapod chordate mammals in evolutionary terms that seem to have evolved a sort of conscious intelligence independently. Alien intelligences right here on Earth.)
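The rudimentary learning I mentioned can be sketched just as minimally. Here's a toy illustration of habituation, the sort of thing C. elegans shows when a harmless stimulus is repeated; the decay constant is invented, not fitted to any real worm:

[code]
# Toy sketch of habituation: a reflex that weakens when the same harmless
# stimulus is repeated. The parameters are invented for illustration only.

withdrawal = 1.0   # initial strength of the withdrawal reflex
retention = 0.7    # fraction of the response retained after each tap

for tap in range(1, 6):
    print(f"tap {tap}: withdrawal strength = {withdrawal:.2f}")
    withdrawal *= retention  # the same stimulus now evokes less response
[/code]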

Robots can already display awareness of their environment in the kind of way I just suggested. We've all seen videos of the robot Atlas picking up boxes, so it must be able to detect and identify boxes and know what to do with them. We've seen it walking through the woods, recognizing and avoiding obstructions like trees.

So I'd say that robots are already conscious in a minimal sense, like the very simple worm. What they seemingly lack are two different things: intelligence and self-awareness.

What seems to me to separate human beings from other animals is that we approach being general cognizers. We aren't as specialized as the lower animals. We can seemingly think about any subject. (But if that's not true, how would we ever know? We couldn't even conceive of whatever we can't think about.) An industrial robot that welds Toyotas can only process certain kinds of data relating to the position of its arm and the location of the parts it is supposed to weld. Human beings think about how they hate their boss, where they want to go to lunch, the status of their love lives, the possibility of AI, and what it means to be conscious.

I'm not sure what enabled mankind to make that leap, but I speculate that it was our adoption of language. Once we started thinking not only in terms of sensory data but of words, and hence abstract ideas, the scope of what we could think about grew exponentially. Not only could we think about our surrounding physical environment in a whole new (mythic and/or scientific) way, in terms of generalities, universals, and abstractions; we could think about ourselves and our own inner states and processes in the same new conceptual way. We acquired the ability to model external reality and ourselves, and in so doing acquired a 'self' and some ability to even think about what it's 'like to be me' (and you as well). Combined with our social instincts, we developed an implicit and mostly unconscious philosophy of mind: an ability to attribute mental states, awareness, motives, and purposes not only to the behavior of our fellow humans, but also to ourselves. (Actually, I think that other mammals can already do this to some extent. My dog certainly can intuit my moods and some of my more obvious desires. But the ability to conceptualize mental states put that preexisting ability on a whole new level.)

So to answer the question in the OP: I don't expect robots to become conscious in a human sense until they become general-purpose and are no longer specialized to particular tasks, until they are able to process language as we do, and until they display some ability to think creatively in terms of generalizations, universals, and concepts and to guide their own thinking as they do it.

Bottom line: I do expect that intelligent conscious AIs are possible, but I think that they are a lot further off than many of the louder pop-futurism voices today think.
#13
Ostronomos
(Jul 24, 2018 04:49 PM)Yazata Wrote: [...] Bottom line: I do expect that intelligent conscious AIs are possible, but I think that they are a lot further off than many of the louder pop-futurism voices today think.

This might actually give a practical explanation of why the universe has a consciousness of its own, as opposed to some mythical afterlife. This consciousness is God, but the extent of its role in existence is rather minimal and random. I am arguing from a rationalistic point of view: this mind-like reality that the individual may access does not necessarily imply a supernatural world; it is a world fully capable of being explained by science. Nor does the spiritual unity we may experience necessarily imply life after death; it implies something more rational. It is now clear to me that complexity in the universe may have generated this mind that I have witnessed for the last six years, and that Quantum Consciousness falls within the realm of the explainable.
#14
Ostronomos
Yazata,

If consciousness can be generated by anything, then it necessarily follows that the universe may generate its own consciousness through self-referencing. And this would be correct.



