https://www.rt.com/news/410952-robot-cit...lligence-/
How can AI achieve consciousness?
World's 1st robot citizen wants her own family, career, etc
''Sophia’s creator David Hanson says the 19-month-old robot, which was awarded Saudi citizenship last month, could achieve consciousness within the next few years.''
It's just fluff for the most part currently.
The first "conscious" robot will likely be made by the military. The "consciousness" itself would actually be a security practice: self-encrypting code that operates at the operating-system level so that any and all processes occur within that crypto-shielding. Should such a system shut down, reboot, or suffer an attempted infiltration, it can "die," causing a kind of self-destruction of all processes within the framework. Rebooting it would likely only be possible with a bootstrapping method in which an external interface acts as a "consciousness" catalyst. This would allow for smarter self-operating systems out in the field that can't easily be turned against their creators by an enemy. The concern with such a system, though, comes down to how that format of consciousness wouldn't necessarily equate to ethics or morality; hence people's fear about what its current purpose would be and what its intentions might "evolve" into if it were created wrongly.

(Nov 26, 2017 04:17 AM)Leigha Wrote: How can AI achieve consciousness?

Daniel Dennett: "The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a sort of robot ourselves." (Consciousness in Human and Robot Minds)

As Dennett further contends in the section further down about Christof Koch, there isn't anything "extra" to pain, feelings, emotions, and the content of thought and the various senses -- other than the outward "awareness" behavior of a body, the personal obsessions with such spectres as expressed by language and the verbal reports to others, the diagrammed configuration of inputs and outputs in the head, and whatever else is covered by functionalism.
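The tamper-triggered "death" and external-bootstrap idea described in the first reply above could be sketched roughly as follows. This is a toy illustration only: the `GuardedCore` class, the XOR placeholder cipher, and the bootstrap-key protocol are all hypothetical inventions for this sketch, and a real system would use authenticated encryption and hardware-backed key storage rather than anything shown here.

```python
import secrets

class GuardedCore:
    """Toy sketch of a 'crypto-shielded' system: all state leaves the core
    only in encrypted form, under a key held in memory. Detected tampering
    destroys the key ("death"); only an externally held bootstrap key can
    revive the core and make old state readable again."""

    def __init__(self, bootstrap_key: bytes):
        self._key = bytes(bootstrap_key)  # working key, derived externally
        self._alive = True

    def _xor(self, data: bytes) -> bytes:
        # Placeholder cipher for illustration only -- a real design would
        # use authenticated encryption (e.g. AES-GCM), never raw XOR.
        return bytes(b ^ self._key[i % len(self._key)]
                     for i, b in enumerate(data))

    def store(self, data: bytes) -> bytes:
        if not self._alive:
            raise RuntimeError("core is dead; external bootstrap required")
        return self._xor(data)  # only ciphertext ever leaves the core

    def load(self, blob: bytes) -> bytes:
        if not self._alive:
            raise RuntimeError("core is dead; external bootstrap required")
        return self._xor(blob)

    def tamper_detected(self):
        # "Death": overwrite the key so every stored blob becomes
        # unrecoverable without the external bootstrap key.
        self._key = secrets.token_bytes(32)
        self._alive = False

    def bootstrap(self, bootstrap_key: bytes):
        # The external "consciousness catalyst": restoring the original
        # key is the only way to bring the core back to life.
        self._key = bytes(bootstrap_key)
        self._alive = True
```

The design choice this sketch tries to capture is that the system's "life" is identified with key material rather than with the running process: powering it off merely suspends it, but tampering destroys the one thing that makes its state meaningful.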
What exists in the skull is the same "stuff" as elsewhere at an atom / particle stratum; it just has unusual procedure-mediated concepts like self / subject, survival, memories and habits associated with its higher-level activity and special arrangement.

So if a sophisticated robot (via its successfully executed tasks and recognition of objects, navigation of the surrounding environment, relating of stories, and general emotional behavior) eventually convinces a majority of human observers that it is conscious thanks to those external affairs, then it is conscious. And it really doesn't take much to sway the average person in that regard:

The Secret of Consciousness, with Daniel Dennett: Although Cog the robot didn't become conscious, Dennett argues that in principle it could have. "One of the interesting things of the Cog project was that it showed -- well, it certainly showed me -- how easy it is to impress people with the apparent consciousness of a robot." The team invited someone to shake hands with Cog and she did. And she screamed. "It didn't feel like shaking hands with a power tool, it felt like shaking hands with a live actor who had a chainmail glove on or something like that... she was shocked at the way Cog's hand moved and the way his eyes would respond. Very disconcerting."

Illusionism (PDF), Dennett: The key for me lies in the everyday, non-philosophical meaning of the word illusionist. An illusionist is an expert in sleight of hand and the other devious methods of stage magic. We philosophical illusionists are also illusionists in the everyday sense -- or should be. That is, our burden is to figure out and explain how the 'magic' is done. As [Keith] Frankish says:

Plus, renaming the "hard problem" the "illusion problem" doesn't really change much.
Any solution to the latter in terms of a natural agency like electromagnetism (etc) would also be a solution to the former, since the illusion fundamentally consists of "something there" rather than "not even nothingness", which is similarly what the puzzle of experience is (when the deceptive "trick" is called that).

Dennett's attributed emphasis below on there being no detectable phenomenal properties in the brain (that's why he asserts that pain, feelings, and manifestations are not "real" in the qualitative sense) is hardly news to his philosophical rivals. That's the magnitude of the challenge -- why Dennett's rivals appeal instead to epiphenomenalism, double-aspectism, panexperientialism and so forth in a natural context (i.e., it seems like extreme skepticism to deny our feelings / showings -- the rest of the world and our own thoughts which they represent could consequently be denied as well).

Book Review: Dennett's favourite argument against qualia takes as evidence what happens when we stare at an inverted blue, yellow and black version of the American flag, and it is replaced by a white screen. An after-image of the flag seems to appear in its red, white and blue version. But as Dennett says, "there are no red stripes on the page, on your retina, or in your brain. In fact, there is no red stripe anywhere." There is no thing at all, which means there are no qualia.

What is deemed a "hard problem" for philosophy thereby wouldn't even seem to be an item which science could address methodologically (i.e., pursuing an explanation for what science can't even confirm as "existing" or "occurring" to begin with). Like the neural correlates of his rivals, Dennett ironically seemed to suggest elsewhere that the view of it as an "illusion / trick" would still have to be explored deeper and resolved by researchers -- that the superficial configuration of inputs / outputs which philosophers of functionalism play with may not suffice.
Does this mean that Dennett is denying, preposterously, that there is anything it is like to be a human? It seems clear that Dennett is not saying this, as he insists that "it is like something to be you," and that "not only are colours real but also consciousness, free will, and dollars."

[But what Dennett is referring to is an abstract account of those affairs in terms of functionalism and science -- not the experiences of everyday awareness or life. Thus he skirts any inconsistency if he jitters back and forth between "not real as this" and "real as that" (albeit still confusing to many of us).]

But subjectively its private appearance differs radically from its "objective", public appearance (i.e., internally it's a simulation of a local part of the world and personal thoughts / memories / speculations). An unavoidable duality or two-sided coin. The qualitative character of the organ only disappears at the abstract level of electrochemical interactions and subatomic physics.

- - -
Never hear anyone talk about this...
Are there fewer ways to murder(?) a conscious AI than a human? After all, if I remove its batteries it isn't necessarily dead, just in stasis, ready to come back to life. The only way I can figure is to destroy the piece of technical ingenuity that allows the AI to be conscious or aware, be it a chip, program, hard drive, etc. I guess the first thing we'd have to decide, right after declaring a machine aware, is whether it is murder to deliberately kill a conscious being. Is it morally just to send conscious machines to fight wars, to destroy one another or even kill humans? How would that be accomplished if machines had the ability to choose not to? If they are true conscious machines then I suppose, like us, they'd have to be programmed (convinced) to fight. Not sure what this says about the free will of conscious machines, or humans for that matter. Support groups for AI, counselling, mental hospitals, service dogs (mechanical variety). Psychology is about to take a giant leap.
Such interesting responses! I have always felt that consciousness is not something that is quantifiable, therefore, science can't ''duplicate'' it or ''clone'' it, for lack of better words. I don't think that consciousness can be programmed, but AI programs could be created to give an illusion of consciousness, from the outside looking in. A robot could seem to be acting on its own, but it never will be. It will have a creator, it will always be nothing more than a series of programs that ''give it life.'' I disagree with the idea that humans and AI are comparable, when it comes to consciousness. Again, illusions aren't facts. Perceptions don't always equate to reality. But, it's definitely fascinating to see where this is all heading.
(Nov 26, 2017 03:09 PM)Zinjanthropos Wrote: [...] Is it morally just to send conscious machines to fight wars, to destroy one another or even kill humans? How would that be accomplished if machines had the ability to choose not to? If they are true conscious machines then I suppose, like us, they'd have to be programmed (convinced) to fight. Not sure what this says about the free will of conscious machines, or humans for that matter.

If they're allowed to be fully autonomous warriors someday, Isaac Asimov's laws of robotics will be violated in the context of warfare. The exception would be international treaties that constrain them to battling only each other. Which rogue states would undermine anyway, and the dam would finally break. ("I've got to suppress or wipe out those rebels who are challenging my rule.")

Quote: Support groups for AI, counselling, mental hospitals, service dogs (mechanical variety). Psychology is about to take a giant leap.

If robots eventually qualify for citizenship or personhood (a careless trend already prematurely starting among nation-states trying to superficially get a progressive jump on their similarly pretentious competitors)... then it would be difficult, or the paperwork tedious, to step over their rights to fix any pathological tendencies they develop, as well as to maintain their servitude roles. Which resultantly would pretty much herald the slow, gradual disappearance of us non-artificial humans. Even the cyborg version of the latter would incrementally extinguish the remaining, naturally evolved fragility / vulnerabilities of the latter.

- - -

(Nov 26, 2017 10:05 PM)C C Wrote: Which resultantly would pretty much herald the slow, gradual disappearance of us non-artificial humans. Even the cyborg version of the latter would incrementally extinguish the remaining, naturally evolved fragility / vulnerabilities of the latter.
What do people who believe in ID think of machines eventually displacing humans, I mean AI must've been in the plans?
CC... I hope that was the condensed version. All kidding aside, I appreciate not only your efforts here but your passion as well. And who doesn't see the imagination, creativity and quality of your avatar vignettes?
Not sure when, but somewhere along the line I've heard that our predilection for machines will allow us more time to think/philosophize. I'm beginning to think that's true. Maybe philosophy sees it as a return to the good old days, except we know a lot more about the physical world. Is metaphysics losing its lustre? Is there a concerted or subconscious effort taking place by philosophers to push aside or distance themselves from theistic beliefs/metaphysics? More emphasis on the mind and consciousness having a physical component, even more so with the advent of quantum mechanics. I suppose AI will do that to you, especially when you start equating a circuit board/computer chip with a mind. Only time will tell.