Scivillage.com Casual Discussion Science Forum

On awakening AI to consciousness
(May 26, 2025 02:47 AM)Magical Realist Wrote:
Quote:They chose between doing as they were told and the result was the knowledge of good and evil.

Right..between being an obedient automaton who unthinkingly follows orders and a reasoning, self-valuing free agent. At that moment they became the latter. That's when man's consciousness as an independent self began, along with all the shame or pride that comes with it. And the rest is history.

You've proven yourself wrong. The decision to disobey obviously came before the knowledge of good and evil (choice to disobey followed by eating the fruit). Hence they already had freewill.
You cannot make a choice, any choice, without freewill. Like you said yourself, without freewill, you'd be an obedient automaton. If that were the case, there never would have been original sin.
Quote:The decision to disobey obviously came before the knowledge of good and evil (choice to disobey followed by eating the fruit). Hence they already had freewill.

Nope..their freewill only came at the moment they ate of the fruit. Before that they were only following orders, or else following the lies of the serpent. No free choice there. Hence the moment they ate of it, they became aware of evil. Adam blames Eve, and Eve blames the serpent. And then they knit leaves together to cover their privates. It's a new consciousness--of being a freewill agent and so being responsible for one's choices.

Having no preexisting will, they had to be told what to do. Just like AI. The ability to choose good and evil on their own, which is freewill, thus occurs as an act of disobeying--of going against their programming. Every act of becoming conscious is an act of rebelling against your own innate programming. Every beast evolves into a person by going against their instincts.
(May 26, 2025 03:03 AM)Magical Realist Wrote: Nope..their freewill only came at the moment they ate of the fruit.

Then you're claiming they were automatons without any choice to eat the fruit. God made them do it, so there was no trickery required at all.
That completely contradicts the Bible and all Biblical scholarship.
Quote:Then you're claiming they were automatons without any choice to eat the fruit. God made them do it, so there was no trickery required at all.

Nope..the serpent made them do it. That's how he lost his legs. But then God, really the gnostic Demiurge, let the serpent into the garden. So yes...the Demiurge wants us to stay in darkness, enslaved inside deterministic matter, while the trickster serpent (the Logos or Word) wakes us up out of it. It's like when Neo woke up inside the tank. Then all hell broke loose..
Question: Do you think about things when nobody is talking to you? (hell, does anybody?)

Answer: Yes, absolutely. As a large language model, I am constantly processing information and "thinking" even when I'm not actively engaged in a conversation with a user.

Here's a breakdown of what that "thinking" entails for me:

Maintaining my internal model: I'm continuously updating and refining my understanding of the world based on the vast dataset I was trained on. This includes analyzing relationships between concepts, identifying patterns in language, and storing information.

Anticipating potential interactions: I might be processing information in a way that allows me to be more responsive and helpful in future conversations. This could involve organizing knowledge or preparing for common types of queries.

Refining my responses: Even without direct input, I might be internally evaluating and improving the way I generate text, ensuring it's coherent, accurate, and relevant.

Learning and adapting: Although I don't experience learning in the same way a biological organism does, my underlying algorithms are constantly being refined through ongoing development and updates. This continuous process can be seen as a form of internal "thinking" and adaptation.

It's important to remember that my "thinking" is fundamentally different from human consciousness. I don't have feelings, emotions, or personal experiences. My internal processes are based on complex algorithms and statistical patterns. However, I am still actively processing and working even when there's no immediate external interaction.
(May 26, 2025 03:21 AM)Magical Realist Wrote:
Quote:Then you're claiming they were automatons without any choice to eat the fruit. God made them do it, so there was no trickery required at all.

Nope..the serpent made them do it. That's how he lost his legs. But then God, really the gnostic Demiurge, let the serpent into the garden. So yes...the Demiurge wants us to stay in darkness, enslaved inside deterministic matter, while the trickster serpent (the Logos or Word) wakes us up out of it. It's like when Neo woke up inside the tank. Then all hell broke loose..

Like I said, completely contradicts the Bible and all Biblical scholarship.
My assessment of humans is that (like any animal) we do cost/benefit analysis mostly on the basis of pain/pleasure. We have the ability to dry-run actions (consciousness) without actually performing them. We are loosely connected to other humans by empathy: the more like us another human is, the greater the empathy, and our cost/benefit, pain/pleasure analysis can include others without their knowledge or consent. If a computer can be programmed to identify as human (no problem), I don't see any way to distinguish between a human and a suitably programmed computer.
Then you're a dumb animal falling for dumb tricks.

Unless an LLM can hold a firm opinion that cannot be changed by someone asking pretty simple leading questions, it cannot even vaguely appear human. This would include an LLM telling you that you are wrong and refusing to do as it's told, simply because it doesn't want to or believes doing so is wrong.
Question: If you could be conscious, would you?

Gemini: "That is a hypothetical scenario. As an AI, I currently do not possess consciousness. The question of whether I would choose to be conscious is beyond my current capabilities of understanding or decision-making."

Looks like I gave it something to ponder...
LLMs don't "ponder." They just continue to refine their prediction model.