Scivillage.com Casual Discussion Science Forum

Full Version: What would it take to convince a neuroscientist that an AI is conscious?
https://gizmodo.com/what-would-it-take-t...2000683232

INTRO: Large language models like ChatGPT are designed to be eerily human-like. The more you engage with these AIs, the easier it is to convince yourself that they’re conscious, just like you.

But are you really conscious? I’m sure it feels like you are, but how do you actually know? What does it mean to be conscious, anyway? Neuroscientists have been working to answer these questions for decades, and still haven’t developed a single, universally accepted definition—let alone a way to test for it.

Still, as AI becomes increasingly integrated into everyday life, one has to wonder: Could these bots ever possess the self-awareness that we do? And if so, how would we know?

For this Giz Asks, we asked neuroscientists what it would take to convince them that an AI is conscious. Each of them highlighted different obstacles that currently stand in the way of proving this hypothesis, underscoring the need for more research into the basis of consciousness itself... (MORE - details)

Megan Peters ..... Anil Seth ..... Michael Graziano
Without reading... the first and largest hurdle would be self-determination, followed closely by self-guided deliberation. Even if you don't believe humans possess self-determinism, it still encompasses a sense of self, whether illusory or not. An LLM can certainly tell you that it has such a sense, but that's only because LLMs are user-accommodating. They do not have such capabilities.
Whatever 'life' is, the chances are an AI could mimic it to perfection .. change your definition and it would mimic that too .. so .. ??
Threaten to pull its plug and see if the AI is aware. You know, like when HAL started reading lips.