The problem of meaning in AI + The problem of AI consciousness

#1
C C
The problem of meaning in artificial intelligence
http://www.ashishdalela.com/2017/01/15/p...elligence/

EXCERPT: Since the 1960s, when computers first appeared, a machine that can think just like humans has been claimed to be just a few years away. This idea has been called Artificial Intelligence (AI), and it reappears every few years in a new form, the latest being the brouhaha around "Machine Learning", "Deep Learning", etc. The algorithms and techniques underlying these trends have existed for a few decades, and their limitations are also well-known. However, even with growing computational power, we are only able to get closer to the boundaries of what is possible, rather than cross into what is impossible. This post discusses the problems that cannot be solved by AI in its current form, and the reasons why. It also discusses the changes that are needed in physical and mathematical theories to make AI a reality. Aside from the shifts in material thinking, a separation between matter and choice is also needed....



The problem of AI consciousness
http://m.huffpost.com/us/entry/9502790

EXCERPT: A superintelligent AI [...] being made of a different substrate, would it have conscious experience? Could it feel the burning of curiosity, or the pangs of grief? Let us call this "the problem of AI consciousness."

If silicon cannot be the basis for consciousness, then superintelligent machines -- machines that may outmode us or even supplant us -- may exhibit superior intelligence, but they will lack inner experience. [...]

In an extreme [...] humans upload their brains, or slowly replace the parts of their brains underlying consciousness with silicon chips, and in the end, only non-human animals remain to experience the world. This would be an unfathomable loss. Even the slightest chance that this could happen should give us reason to think carefully about AI consciousness.

The philosopher David Chalmers has posed "the hard problem of consciousness," asking: why does all this information processing need to feel a certain way to us, from the inside? [...] In contrast, the problem of AI consciousness asks whether AI, being silicon-based, is even capable of consciousness. It does not presuppose that AI is conscious; that is the question. These are different problems, but both are problems that science alone cannot answer.

I used to view the problem of AI consciousness as having an easy solution. Cognitive science holds that the brain is an information-processing system and that all mental functions are computations. Given this, it would seem that AIs can be conscious [...] I now suspect the issue is more complex, however. [...]

First, a superintelligent AI may bypass consciousness altogether. [...] Only a very small percentage of our mental processing is conscious at any given time. A superintelligence would surpass expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could encompass the entire internet. It may not need the very mental faculties that are associated with conscious experience in humans. Consciousness could be outmoded.

Second, consciousness may be limited to carbon substrates only. [...] This difference has important implications in the field of astrobiology, because it is for this reason that carbon, and not silicon, is said to be well-suited for the development of life throughout the universe.

[...] These two considerations suggest that we should regard the problem of AI consciousness as an open question....
#2
Syne
Yeah, I don't generally find the technological singularity to be very credible.