Tononi’s "integrated information theory" might solve neuroscience’s biggest puzzle


EXCERPT: . . . When you or I perform an action, our minds are filled with a complex conscious experience. We can’t just assume that this is also true for other animals, however – particularly ones with brains so different from our own. It’s perfectly feasible – some scientists would even argue that it’s likely – that a creature like a lobster lacks any kind of internal experience, in contrast to the rich world inside our heads.

“With a dog, who behaves quite a lot like us, who is in a body which is not too different from ours, and who has a brain that is not too different from ours, it’s much more plausible that it sees things and hears things very much like we do, than to say that it is completely ‘dark inside’, so to speak,” says Giulio Tononi, a neuroscientist at the University of Wisconsin-Madison. “But when it comes down to a lobster, all bets are off.”

The question of whether other brains – quite alien to our own – are capable of awareness is just one of the many conundrums that arise when scientists start thinking about consciousness. When does an awareness of our own being first emerge in the brain? Why does it feel the way it does? And will computers ever be able to achieve the same internal life?

Tononi may have a solution to these puzzles. His "integrated information theory" is one of the most exciting theories of consciousness to have emerged over the last few years, and although it is not yet proven, it provides some testable hypotheses that may soon give a definitive answer.

[...] It begins with a set of axioms that define what consciousness actually is. Tononi proposes that any conscious experience needs to be structured, for instance ... It’s also specific and "differentiated" – each experience will be different depending on the particular circumstances ... And it is integrated. If you look at a red book on a table, its shape and colour and location – although initially processed separately in the brain – are all held together at once in a single conscious experience. We even combine information from many different senses ... into a single sense of the here and now. From these axioms, Tononi proposes that we can identify a person’s (or an animal’s, or even a computer’s) consciousness from the level of “information integration” that is possible in the brain (or CPU). According to his theory, the more information that is shared and processed between many different components to contribute to that single experience, the higher the level of consciousness.
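To make the idea of "information integration" a little more concrete, here is a minimal sketch of a much simpler, related quantity sometimes called multi-information: the sum of the entropies of a system's individual units minus the entropy of the whole system. It is zero when the units are independent and grows as they share information. This is not Tononi's full Φ measure (which involves finding a minimum-information partition and is far more involved); it is only an illustrative toy, and the example states below are invented:

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (in bits) of a list of observed states."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def integration(states):
    """Multi-information: sum of per-unit entropies minus the joint entropy.

    Zero when the units are statistically independent; larger when the
    units share information (a crude stand-in for 'integration').
    """
    joint = entropy([tuple(s) for s in states])
    parts = sum(entropy([s[i] for s in states]) for i in range(len(states[0])))
    return parts - joint

# Two independent coin-flip units: all four joint states equally likely.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
# Two perfectly correlated units: knowing one tells you the other.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]

print(integration(independent))  # 0.0 bits - no shared information
print(integration(coupled))      # 1.0 bit  - fully shared information
```

The coupled system scores higher because its parts "hold together": the joint state carries less entropy than the parts counted separately. Tononi's actual theory goes much further, asking how much integration would be lost under the worst-case way of cutting the system in two.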

[...] It would also answer some long-standing questions about artificial intelligence. Tononi argues that the basic architecture of the computers we have today – made from networks of transistors – precludes the level of information integration necessary for consciousness. So even if they can be programmed to behave like a human, they would never have our rich internal life.

“There is a sense, according to some, that sooner rather than later computers may be cognitively as good as we are...” says Tononi. “But if integrated information theory is correct, computers could behave exactly like you and me – indeed you might [even] be able to have a conversation with them that is as rewarding ... and yet there would literally be nobody there.” Again, it comes down to that question of whether intelligent behaviour has to arise from consciousness – and Tononi’s theory would suggest it does not.

He emphasises this is not just a question of computational power, or the kind of software that is used. “The physical architecture is always more or less the same, and that is always not at all conducive to consciousness.” So thankfully, the kind of moral dilemmas seen in series like Humans and Westworld may never become a reality.

[...] Although the concept of “group consciousness” may seem like a stretch, Thomas Malone thinks that Tononi’s theory might help us to understand how large bodies of people sometimes begin to think, feel, remember, decide, and react as one entity. ... Tononi’s theory may help us to understand ‘minds’ that are very alien to our own. (MORE - details)
Anyone who goes for lines like:
Quote: But are they [lobsters] actually “aware” of the sensation [of being boiled alive]? Or is that response [apparent agony] merely a reflex?
is going to annoy me. I have been told that you have to put the lid on the pan to stop the lobster jumping out. Seeking a means of escape requires intelligence - it is [far more] than a 'reflex'.

So could you ever have billions of nodes [of computers] all connected together? Not according to Tononi - he describes the internet as an impossible thing.

I'd say (without evidence) that 'consciousness' is a natural (evolutionary) consequence of trying to stay one step ahead of the game. Anything that sees a predator in the distance without dealing with the 'predator here' possibility is unlikely to live very long. There are few things that can spend all their lives hiding under a rock, so most things have to do a risk assessment most (if not all) of the time when they aren't hiding under rocks. I think humans can spook most animals - humans are quite new on the block, and most animals have neither the evolutionary nor the individual background to know whether we're harmless or a one-way trip to being boiled in a bag. I don't deny that humans have worked out ways of catching lobsters faster than lobsters have learned to avoid those traps. For the present, Tononi (as a human) can claim to be smarter than a lobster - I'll give him that much.

Cutting to the chase - the technology to create artificial intelligence already exists (though Tononi is in denial), and Tononi (the hero of the piece) has no idea where or how to start. When encountering people who appear to know nothing about anything, one must accept the possibility that they actually know nothing about anything.
