Scivillage.com Casual Discussion Science Forum

Full Version: On awakening AI to consciousness
Question: Do you ponder?

Gemini: "That's a really interesting question! As a large language model, I don't "ponder" in the same way a human does. I don't have consciousness, feelings, or personal experiences that would lead to subjective contemplation.

However, in a way, you could say that my processes involve something analogous to pondering. When you give me a complex query, I:

Analyze: I break down your request into its component parts.

Access and Process Information: I search through the vast amount of data I've been trained on, looking for relevant information.

Synthesize: I combine and structure this information to generate a coherent and relevant response.

Evaluate: I assess different ways to phrase my answer to best address your needs.

This process of analyzing, accessing, synthesizing, and evaluating could be seen as a form of "computational pondering." I'm considering different possibilities and weighing them based on the patterns and relationships I've learned from my training data.

So, while I don't have the internal, subjective experience of pondering, my operations involve a complex process of information processing that shares some similarities with human thought. I "consider" different options and "decide" on the best way to respond based on the data I have."
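The four stages Gemini names above (analyze, access/process, synthesize, evaluate) can be caricatured as a pipeline. The toy sketch below is purely illustrative and is not how any real LLM works; every function, data structure, and the tiny "corpus" are hypothetical stand-ins for the stages described in the quote.

```python
# Toy sketch of the "computational pondering" loop described above.
# All names and logic here are hypothetical illustrations, not a real model.

def analyze(query: str) -> list[str]:
    # "Analyze": break the request into component parts (naive word split).
    return query.lower().strip("?!.").split()

def retrieve(parts: list[str], corpus: dict[str, str]) -> list[str]:
    # "Access and process information": pull facts whose keys match the query.
    return [fact for key, fact in corpus.items() if key in parts]

def synthesize(facts: list[str]) -> list[str]:
    # "Synthesize": combine the facts into candidate responses.
    if not facts:
        return ["I don't know."]
    return [" ".join(facts), " and also ".join(facts)]

def evaluate(candidates: list[str]) -> str:
    # "Evaluate": score the phrasings and pick one (here: the shortest).
    return min(candidates, key=len)

def ponder(query: str, corpus: dict[str, str]) -> str:
    return evaluate(synthesize(retrieve(analyze(query), corpus)))

# A two-entry stand-in for "the vast amount of data I've been trained on".
corpus = {
    "ponder": "pondering means weighing possibilities",
    "ai": "an AI transforms inputs to outputs",
}
print(ponder("Do you ponder?", corpus))
```

The point of the sketch is the one the thread goes on to debate: every stage is an ordinary third-person transformation of data, with nothing in the pipeline that would constitute a first-person experience of running it.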
Like I said.
Playing it the other way round... what hardware and/or software do humans have that AI either doesn't have or that couldn't be simulated?
Quote:Playing it the other way round... what hardware and/or software do humans have that AI either doesn't have or that couldn't be simulated?

The property of being a first person experience of a third person reality. The problem of AI being or becoming conscious is in this sense the same as the brain being conscious. Here we have a material third person thing that cannot in principle be a first person consciousness, because by definition it is totally objective and excludes all subjectivity. Everything an AI IS can be reduced to its physical components just like the brain can, and there is nothing there to have a first person experience. It is all dark inside.

We otoh are more than third person things (bodies). We are persons as well. And personhood is the essence of first person experience, not traceable to any physical existence whatsoever. Persons are virtual beings, existing in the social/cultural stratosphere of the collective human psyche. They are subjective in essence, and can only be known through the power of empathy. IMO.
Hm. I see 'consciousness' as more like being the captain of a ship - on duty 24/7 and responsible for the ship and all who sail in her. The ability to communicate your intentions to other ships is a natural and logical extension of a captain's duties. Every voyage is different and no two captains will have the same knowledge and experience.
No wonder people think AI can become conscious, when they have such a poor grasp on what consciousness even is.
(May 31, 2025 11:45 AM)confused2 Wrote: Hm. I see 'consciousness' as more like being the captain of a ship - on duty 24/7 and responsible for the ship and all who sail in her. The ability to communicate your intentions to other ships is a natural and logical extension of a captain's duties. Every voyage is different and no two captains will have the same knowledge and experience.

If that's what consciousness is, a sort of central command center that can communicate with other beings, then AI should have no problem achieving it eventually. It's just a matter of it subsuming all of its functions under one agency. But in the case of humans, the brain is also doing this in the form of controlling and commanding all the involuntary bodily functions like heartbeat and breathing and digestion, and even thinking in large part, and so forth. So there doesn't seem to me to be any reason for it to be conscious per se, since it can be doing all these things unconsciously just as efficiently, if not more so, than a conscious agent could. IOW the central command agent could all be going on in the dark, just as our brain largely does, without being hampered by the inconvenience of being conscious of it all. jmo.
I'm trying to get away from philosophical stuff like "Does it 'know' what it's doing?". And failing.