The robots won't take over because they couldn't care less

#1
C C
https://aeon.co/essays/the-robots-wont-t...-care-less

EXCERPT: . . . Does all this give us reasons for optimism when we look to the automated factories and offices of the future? Talk of human-AI cooperation is usually seen as ‘good news’. Perhaps collaboration between people and goal-seeking computers is not only possible in principle, but also – if put into practice – satisfying for the people involved, because they would benefit from participation in a shared enterprise. Will we be able to share with our AI ‘colleagues’ in jokes over coffee, in the banter between rival football fans, in the arguments about the news headlines, in the small triumphs of standing up to a sarcastic or bullying boss?

No – because computers don’t have goals of their own. The fact that a computer is following any goals at all can always be explained with reference to the goals of some human agent. (That’s why responsibility for the actions of AI systems lies with their users, manufacturers and/or retailers – not with the systems themselves.) Besides this, an AI program’s ‘goals’, ‘priorities’ and ‘values’ don’t matter to the system. When DeepMind’s AlphaGo beat the world champion Lee Sedol in 2016, it felt no satisfaction, still less exultation. And when the then-reigning chess program Stockfish 8 was trounced by AlphaZero a year later (even though AlphaZero had been given no data or advice about how humans play), it wasn’t beset by disappointment or humiliation. Garry Kasparov, by contrast, was devastated when he was beaten at chess by IBM’s Deep Blue in 1997.

So if you were to succeed in working with some clever AI system – as Kasparov can today, as the biological half of a human-AI chess ‘centaur’ – you couldn’t celebrate that success together. You couldn’t even share the minor satisfactions, excitements and disappointments along the way. You’d be drinking your Champagne alone. You’d have a job – but you’d miss out on job satisfaction.

Moreover, it makes no sense to imagine that future AI systems might have needs. They don’t need sociality or respect in order to work well: a program either works, or it doesn’t. For needs are intrinsic to, and their satisfaction is necessary for, autonomously existing systems – that is, living organisms. Needs can’t sensibly be ascribed to artefacts.

Some AI scientists disagree. Steve Omohundro, for instance, argues that any sophisticated AI system would develop ‘drives’ – such as resisting being turned off, trying to make copies of itself, and trying to gain control of resources no matter what other systems (living or not) might be harmed thereby. These, he says, would not have to be programmed in. They would develop ‘because of the intrinsic nature of goal-driven systems’: any such system will be ‘highly motivated’ to discover ways of self-improvement that enable its goals to be achieved more effectively. Such drives (potentially catastrophic for humanity) would inevitably develop unless future AI systems were ‘carefully designed [by us] to prevent them from behaving in harmful ways’.

However, Omohundro’s argument begs the question at issue here. He assumes that (some) AI systems can be ‘highly motivated’, that they can care about their own preservation and about achieving their various goals. Indeed, he takes it for granted that they can have goals, in the same (caring) sense that we do.

But the discussion above, about the relation between needs and goals, reinforces the claim that computers – which can’t have needs (and whose material existence isn’t governed by the FEP, the free energy principle) – can’t really have goals, either. Since striving (or actively seeking) is essential to the concept of need, and all our intentions are underpinned by our needs, human goals always involve some degree of caring. That’s why achieving them is inherently satisfying. A computer’s ‘goals’, by contrast, are empty of feeling. An AI planning program, no matter how nit-picking it might be, is literally care-less.
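
The point can be made concrete with a minimal sketch, not from the essay itself: a toy Python planner in which the ‘goal’ is nothing but a test function handed in by the human caller. The names plan, is_goal and successors are hypothetical, chosen here purely for illustration.

Code:
from collections import deque

def plan(start, is_goal, successors):
    """Breadth-first planner. Its 'goal' is just a predicate supplied
    by whoever calls it; nothing inside the program is at stake."""
    frontier = deque([[start]])      # paths still waiting to be extended
    seen = {start}                   # states already visited
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path              # 'success', registered as a mere return value
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None                      # 'failure', with no disappointment attached

# The human caller's goal (reach 10 from 0 in steps of +1 or +3) is mere data:
print(plan(0, lambda s: s == 10, lambda s: [s + 1, s + 3]))

Swap in a different predicate and the same machinery ‘pursues’ a different goal; nothing in the program registers the change. That is the sense in which its goals are borrowed from the user rather than owned.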

Similarly, even the most ‘friendly’ AI is, intrinsically, value-less. When AI teams talk of aligning their program’s ‘values’ with ours, they should not be taken as speaking literally. That’s good news, given the increasingly common fear that ‘The robots will take over!’ The truth is that they certainly won’t want to....

MORE: https://aeon.co/essays/the-robots-wont-t...-care-less
#2
Zinjanthropos
I wonder how eager a robot/AI with a so-called consciousness would be to have that consciousness replaced by a new, improved model, even if it were other robots doing the upgrade. Would it be willing to give it up and trust its fellow robots?
#3
RainbowUnicorn
From my very layman's grasp of what the computer scientists say, there is no REAL self-aware AI; everything is purely a command-based process.

What I wonder, playing the scare-tactics role: what happens when the computer (it's not AI yet) gathers all the information it can? Will it continue trying to gather more information, and how will it do that?

