Scivillage.com Casual Discussion Science Forum
The robots won't take over because they couldn't care less - Printable Version




The robots won't take over because they couldn't care less - C C - Aug 15, 2018

https://aeon.co/essays/the-robots-wont-take-over-because-they-couldnt-care-less

EXCERPT: . . . Does all this give us reasons for optimism when we look to the automated factories and offices of the future? Talk of human-AI cooperation is usually seen as ‘good news’. Perhaps collaboration between people and goal-seeking computers is not only possible in principle, but also – if put into practice – satisfying for the people involved, because they would benefit from participation in a shared enterprise. Will we be able to share with our AI ‘colleagues’ in jokes over coffee, in the banter between rival football fans, in the arguments about the news headlines, in the small triumphs of standing up to a sarcastic or bullying boss?

No – because computers don’t have goals of their own. The fact that a computer is following any goals at all can always be explained with reference to the goals of some human agent. (That’s why responsibility for the actions of AI systems lies with their users, manufacturers and/or retailers – not with the systems themselves.) Besides this, an AI program’s ‘goals’, ‘priorities’ and ‘values’ don’t matter to the system. When DeepMind’s AlphaGo beat the world champion Lee Sedol in 2016, it felt no satisfaction, still less exultation. And when the then-reigning chess program Stockfish 8 was trounced by AlphaZero a year later (even though AlphaZero had been given no data or advice about how humans play), it wasn’t beset by disappointment or humiliation. Garry Kasparov, by contrast, was devastated when he was beaten at chess by IBM’s Deep Blue in 1997.

So if you were to succeed in working with some clever AI system – as Kasparov can today, as the biological half of a human-AI chess ‘centaur’ – you couldn’t celebrate that success together. You couldn’t even share the minor satisfactions, excitements and disappointments along the way. You’d be drinking your Champagne alone. You’d have a job – but you’d miss out on job satisfaction.

Moreover, it makes no sense to imagine that future AI might have needs. They don’t need sociality or respect in order to work well. A program either works, or it doesn’t. For needs are intrinsic to, and their satisfaction is necessary for, autonomously existing systems – that is, living organisms. They can’t sensibly be ascribed to artefacts.

Some AI scientists disagree. Steve Omohundro, for instance, argues that any sophisticated AI system would develop ‘drives’ – such as resisting being turned off, trying to make copies of itself, and trying to gain control of resources no matter what other systems (living or not) might be harmed thereby. These, he says, would not have to be programmed in. They would develop ‘because of the intrinsic nature of goal-driven systems’: any such system will be ‘highly motivated’ to discover ways of self-improvement that enable its goals to be achieved more effectively. Such drives (potentially catastrophic for humanity) would inevitably develop unless future AI systems were ‘carefully designed [by us] to prevent them from behaving in harmful ways’.

However, Omohundro’s argument begs the question at issue here. He assumes that (some) AI systems can be ‘highly motivated’, that they can care about their own preservation and about achieving their various goals. Indeed, he takes it for granted that they can have goals, in the same (caring) sense that we do.

But the discussion above, about the relation between needs and goals, reinforces the claim that computers – which can’t have needs (and whose material existence isn’t governed by the FEP, the free-energy principle) – can’t really have goals, either. Since striving (or actively seeking) is essential to the concept of need, and all our intentions are underpinned by our needs, human goals always involve some degree of caring. That’s why achieving them is inherently satisfying. A computer’s ‘goals’, by contrast, are empty of feeling. An AI planning program, no matter how nit-picking it might be, is literally care-less.

Similarly, even the most ‘friendly’ AI is, intrinsically, value-less. When AI teams talk of aligning their program’s ‘values’ with ours, they should not be taken as speaking literally. That’s good news, given the increasingly common fear that ‘The robots will take over!’ The truth is that they certainly won’t want to....

MORE: https://aeon.co/essays/the-robots-wont-take-over-because-they-couldnt-care-less


RE: The robots won't take over because they couldn't care less - Zinjanthropos - Aug 25, 2018

I wonder how eager a robot/AI with a so-called consciousness would be to have that consciousness replaced by a new, improved model, even if it were other robots doing the upgrade. Would it be willing to give it up and trust its fellow robots?


RE: The robots won't take over because they couldn't care less - RainbowUnicorn - Aug 25, 2018

From my very layman's grasp of what the computer scientists say, there is no REAL self-aware AI; thus everything is purely a command-based process.

What I wonder, playing the scare-tactics role: what happens when the computer (it's not AI yet) gathers all the information it can? Will it keep trying to gather more information, and how will it do that?