Scivillage.com Casual Discussion Science Forum

Daniel Dennett on AI: we need smart tools, not conscious feeling ones
Consciousness Not Needed (a review of the Wired article further down)
https://www.consciousentities.com/2019/0...ot-needed/

EXCERPT: . . . human-style consciousness in machines – is unnecessary, says Daniel Dennett, in an interesting piece that makes several unexpected points. [...] He starts out by highlighting some dangers that arise even with the non-conscious kinds of AI we have already. Recent developments make it easy to fake very realistic video of recognisable people doing or saying whatever we want. ... I think his concerns have a more solid foundation though, when he goes on to say that there is now some danger of people mistaking simple AI for the kind of conscious entity they can trust. People do sometimes seem willing to be convinced rather easily that a machine has a mind of its own.

[...] Dennett emphasises that current AI lacks true agency and calls on the creators of new systems to be more upfront about the fact that their humanising touches are fakery and even ‘false advertising’. I have the impression that Dennett would once have considered agency, as a concept, a little fuzzy round the edges, a matter of explanatory stances and optimality rather than a clear reality whose sharp edges needed to be strongly defended. Years ago he worked with Rodney Brooks on Cog, a deliberately humanoid robot they hoped would attain consciousness (it all seemed so easy back then…) and my impression is that the strategy then had a large element of ‘fake it till you make it’. But hey, I wouldn’t criticise Dennett for allowing his outlook to develop in the light of experience.

On to the two main points. Dennett says we don’t need conscious AI because there is plenty of natural human consciousness around; what we need is intelligent tools [...] I would have thought that there were jobs a conscious AI could do for us... The second main point is that we ought to be wary of trusting conscious AIs because they will be invulnerable. Putting them in jail is meaningless because they live in boxes anyway; they can copy themselves and download backups, so they don’t die; unless we build in some pain function, there are really no sanctions to underpin their morality.

This is interesting because Dennett by and large assumes that future conscious AIs would be entirely digital, made of data; but the points he makes about their immortality and generally Platonic existence implicitly underline how different digital entities are from the one-off reality of human minds. I’ve mentioned this ontological difference before, and it surely provides one good reason to hesitate before assuming that consciousness can be purely digital. We’re not just data, we’re actual historical entities; what exactly that means, whether something to do with Meinongian distinctions between existence and subsistence, or something else entirely, I don’t really think anyone knows, frustrating as that is.

Finally, isn’t it a bit bleak to suggest that we can’t trust entities that aren’t subject to the death penalty, imprisonment, or other punitive sanctions? Aren’t there other grounds for morality? Call me Pollyanna, but I like to think of future conscious AIs proving irrefutably for themselves that virtue is its own reward. (MORE - details)


Will AI Achieve Consciousness? Wrong Question
https://www.wired.com/story/will-ai-achi...-question/

EXCERPT (Daniel Dennett): . . . AI in its current manifestations is parasitic on human intelligence. [...] These machines do not (yet) have the goals or strategies or capacities for self-criticism and innovation to permit them to transcend their databases by reflectively thinking about their own thinking and their own goals. As I have been arguing recently, we’re making tools, not colleagues, and the great danger is not appreciating the difference...

They are, as Norbert Wiener says, helpless, not in the sense of being shackled agents or disabled agents but in the sense of not being agents at all—not having the capacity to be “moved by reasons” (as Kant put it) presented to them. It is important that we keep it that way, which will take some doing. In the long term, “strong AI,” or general artificial intelligence, is possible in principle but not desirable (more on this later). The far more constrained AI that’s practically possible today is not necessarily evil. But it poses its own set of dangers—chiefly that it might be mistaken for strong AI!

[...] We don’t need artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools. Tools do not have rights and should not have feelings that could be hurt or be able to respond with resentment to “abuses” rained on them by inept users.

One of the reasons for not making artificial conscious agents is that, however autonomous they might become (and in principle they can be as autonomous, as self-enhancing or self-creating, as any person), they would not—without special provision, which might be waived—share with us natural conscious agents our vulnerability or our mortality. [...] The problem for robots who might want to attain such an exalted status is that, like Superman, they are too invulnerable to be able to make a credible promise.

[...] So what we are creating are not—should not be—conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates, no personality (but all sorts of foibles and quirks that would no doubt be identified as the “personality” of the system): boxes of truths (if we’re lucky) almost certainly contaminated with a scattering of falsehoods.

It will be hard enough learning to live with them without distracting ourselves with fantasies about the Singularity in which these AIs will enslave us, literally. The human use of human beings will soon be changed—once again—forever, but we can take the tiller and steer between some of the hazards if we take responsibility for our trajectory. (MORE - details)

Have always asked why AI needs arms (hands) and legs (feet). Throw in a head, eyes and mouth for that matter. What's wrong with an immobile computer with feelings?

“Hey pc, just do what I tell ya or it’s the sledgehammer for you”. That should work. Can you envision a time when some special interest group claims unplugging or undoing conscious AI is murder or just talking about it negatively is a hate crime?

(Jun 28, 2019 02:08 PM)Zinjanthropos Wrote: “Hey pc, just do what I tell ya or it’s the sledgehammer for you”. That should work. Can you envision a time when some special interest group claims unplugging or undoing conscious AI is murder or just talking about it negatively is a hate crime?


That's very much touching upon the range of what will likely happen eventually. Material about robot/AI rights and ethics is already being tentatively cranked out by humanities departments. Social justice crusaders of the future will almost certainly seize upon "AI receiving rights", even for the deceptive ones [below], as or if they run out of objects in traditional territory to be paladins for. (It's literally become like an evangelical ideology, looking for the next cause to score on, one that provides purpose and feelings of noble sainthood or redemption for its participants.)

Quote: Have always asked why AI needs arms (hands) and legs (feet). Throw in a head, eyes and mouth for that matter. What’s wrong with an immobile computer with feelings?


The growing turn toward embodied cognition might imply that an AI must possess a body to develop consciousness the way humans or animals do (though presumably with psychological variation corresponding to the body types of different species). However, an "AI resting on the shelf" could surely be designed to represent itself internally in a simulated reality where it did have a body, interacting with that environment to learn and test things and to develop psychological responses applicable to the "external world" inhabitants and affairs it deals with from its shelf-bound position.
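Purely as a toy illustration of that "body only in simulation" idea (a minimal sketch; every class, function, and number here is hypothetical and mine, not anything from Dennett or the articles above): an immobile program gives its imagined body a one-dimensional inner world and learns, by trial and error, which "movements" reach a goal, all without any physical actuators.

Code:
import random

class InnerWorld:
    """A 1-D simulated environment holding the AI's imagined body."""
    def __init__(self, size=10, goal=7):
        self.size, self.goal = size, goal
        self.body_pos = 0  # where the simulated body starts

    def reset(self):
        self.body_pos = 0
        return self.body_pos

    def step(self, action):  # action is -1 (left) or +1 (right)
        self.body_pos = max(0, min(self.size - 1, self.body_pos + action))
        reward = 1.0 if self.body_pos == self.goal else -0.01
        return self.body_pos, reward, self.body_pos == self.goal

class ShelfAI:
    """An immobile AI that learns embodied responses purely by
    rehearsing inside its inner world; it has no actuators at all."""
    def __init__(self, world):
        self.world = world
        self.q = {}  # (state, action) -> estimated value

    def choose(self, state, eps=0.1):
        if random.random() < eps:
            return random.choice((-1, 1))  # occasional exploration
        return max((-1, 1), key=lambda a: self.q.get((state, a), 0.0))

    def rehearse(self, episodes=500, alpha=0.5, gamma=0.9):
        for _ in range(episodes):
            state = self.world.reset()
            for _ in range(200):  # cap the length of each rehearsal
                action = self.choose(state)
                nxt, reward, done = self.world.step(action)
                best_next = max(self.q.get((nxt, a), 0.0) for a in (-1, 1))
                old = self.q.get((state, action), 0.0)
                # Standard one-step value update toward the observed outcome
                self.q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
                state = nxt
                if done:
                    break

ai = ShelfAI(InnerWorld())
ai.rehearse()
print(ai.choose(0, eps=0.0))  # should print 1: "move right" toward the imagined goal

If run, the final line should print 1: sitting on its shelf, the system has learned which way its simulated body should move toward the imagined goal. Nothing about the arrangement requires arms, legs, eyes or a mouth in the physical world.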

What Dennett warns about, however (an AI designed to fool people into thinking it has full "agency" when it does not; he dabbled in such a device himself in the past) is still very much applicable. That in turn raises the question of why the industry would even bother with the extra effort and expense of endowing AI with the "real thing".

It does seem extraordinary for Dennett, of all people, to be warning that we might be lulled into believing that deceptive machines have agency or strong consciousness or whatever, since he has historically taken an illusionist position about that stuff even for humans.

Let’s face it, if we can manufacture a consciousness then so could any other intelligent life form, so I don’t know why it would even be called artificial. I think AI makes consciousness look less and less special. I see a return to von Däniken thinking. :D

Poor buggers in those science labs trying to brew up some life; it must be a hell of a lot more complicated than slapping a consciousness together.