Is consciousness a battle between beliefs & perceptions?

#1
NOTE: This seems only to be yet another potential way of detecting phenomenal consciousness in an organism and of implementing such experiences in AI. It posits no precursor properties of matter, amenable to systematic manipulation, that would offer a deep explanation of how these manifestations could incrementally arise, an explanation that would dispel the appearance of algorithmic conjuring or summoning (i.e., dualism) of those extrospective and introspective experiences.

https://aeon.co/ideas/is-consciousness-a...erceptions

EXCERPT: . . . As well as a handy engineering trick, GANs are a potentially useful analogy for understanding the human brain. In mammalian brains, the neurons responsible for encoding perceptual information serve multiple purposes. For example, the neurons that fire when you see a cat also fire when you imagine or remember a cat; they can also activate more or less at random. So whenever there’s activity in our neural circuitry, the brain needs to be able to figure out the cause of the signals, whether internal or external.

We can call this exercise perceptual reality monitoring. John Locke, the 17th-century British philosopher, believed that we had some sort of inner organ that performed the job of sensory self-monitoring. But critics of Locke wondered why Mother Nature would take the trouble to grow a whole separate organ, on top of a system that’s already set up to detect the world via the senses. [...] In light of what we now know about GANs, though, Locke’s idea makes a certain amount of sense. Because our perceptual system takes up neural resources, parts of it get recycled for different uses. So imagining a cat draws on the same neuronal patterns as actually seeing one. But this overlap muddies the water regarding the meaning of the signals. Therefore, for the recycling scheme to work well, we need a discriminator to decide when we are seeing something versus when we’re merely thinking about it. This GAN-like inner sense organ – or something like it – needs to be there to act as an adversarial rival, to stimulate the growth of a well-honed predictive coding mechanism.
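The generator/discriminator division the excerpt describes can be caricatured in a few lines of Python. This is only an illustrative sketch of "perceptual reality monitoring", not anything from the article: the pattern, the extra sensory gain for external input, and the threshold are all invented numbers chosen so the toy discriminator can tell seeing from imagining.

```python
def perceptual_signal(source):
    """Toy model: the same neural pattern fires whether a cat is
    seen or imagined; only external input adds extra sensory drive."""
    base = [1.0, 0.8, 0.6]              # hypothetical 'cat' activation pattern
    if source == "external":
        return [x + 0.3 for x in base]  # external stimulation boosts the signal
    return base                          # imagination reuses the same pattern

def discriminator(signal, threshold=2.8):
    """Perceptual reality monitor: decide whether the activity
    truthfully reflects the state of the world right now."""
    return "seeing" if sum(signal) > threshold else "imagining"

print(discriminator(perceptual_signal("external")))  # -> seeing
print(discriminator(perceptual_signal("internal")))  # -> imagining
```

In a real GAN the discriminator's verdict would feed back as a training signal for the generator; here it only shows the core point, that one shared pattern needs a second mechanism to label its cause as internal or external.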

If this account is right, it’s fair to say that conscious experience is probably akin to a kind of logical inference. That is, if the perceptual signal from the generator says there is a cat, and the discriminator decides that this signal truthfully reflects the state of the world right now, we naturally see a cat. The same goes for raw feelings: pain can feel sharp, even when we know full well that nothing is poking at us, and patients can report feeling pain in limbs that have already been amputated. To the extent that the discriminator gets things right most of the time, we tend to trust it. No wonder that when there’s a conflict between subjective impressions and rational beliefs, it seems to make sense to believe what we consciously experience.

[...] The future of AI is more challenging. If we built a robot with a very complex GAN-style architecture, would it be conscious? On the basis of our theory, it would probably be capable of predictive coding, exercising the same machinery for perception as it deploys for top-down prediction or imagination. Perhaps like some current generative networks, it could ‘dream’. Like us, it probably couldn’t reason away its pain – and it might even be able to appreciate stage magic.

Theorising about consciousness is notoriously hard, and we don’t yet know what it really consists in. So we wouldn’t be in a position to establish if our robot was truly conscious. Then again, we can’t do this with any certainty with respect to other animals either.... (MORE - details)
#2
I would be apt to agree that the crux of consciousness is likely discrimination (the ability to make a choice). I've long said that the story of Eden is an analogy for humans gaining the discrimination of right and wrong. Don't tell the faux-tolerant left that their existence crucially hinges on discrimination. Maybe that's why some of them are antinatalists.

I would be willing to bet that it's the other way around, though. A multitasking system, e.g. one "recycled for different uses", probably didn't become discriminating only after multiple uses for the same neurons had developed. Otherwise there would likely have been a maladaptive period in which memory and perception were completely indistinguishable, which wouldn't have aided survival or promoted sexual selection.

