Thread Rating:
  • 0 Vote(s) - 0 Average
  • 1
  • 2
  • 3
  • 4
  • 5

Don’t worry about AI breaking out of its box—worry about us breaking in

#1
C C Offline
https://arstechnica.com/gadgets/2023/02/...eaking-in/

EXCERPTS: But language models are already proliferating. The open source movement will inevitably build some great guardrail-optional systems...

[...] the famous “DAN” (Do Anything Now) prompt ... first emerged on Reddit in December. DAN essentially invites ChatGPT to cosplay as an AI that lacks the safeguards that otherwise cause it to politely (or scoldingly) refuse to share bomb-making tips, give torture advice, or spout radically offensive expressions. Though the loophole has been closed, plenty of screenshots online show “DanGPT” uttering the unutterable—and often signing off by neurotically reminding itself to “stay in character!”

This is the inverse of a doomsday scenario that often comes up in artificial superintelligence theory. The fear is that a super AI might easily adopt goals that are incompatible with humanity’s existence ... Researchers may try to prevent this by locking the AI onto a network that’s completely isolated from the Internet, lest the AI break out, seize power, and cancel civilization. But a superintelligence could easily cajole, manipulate, seduce, con, or terrorize any mere human into opening the floodgates, and therein lies our doom.

Much as that would suck, the bigger problem today lies with humans busting into the flimsy boxes that shield our current, un-super AIs. While this shouldn’t trigger our immediate extinction, plenty of danger lies here.

Let’s start with the obvious fact that in an unguarded moment, ChatGPT probably could offer lethally accurate tips to criminals, torturers, terrorists, and lawyers. Open AI has disabled the DAN prompt. But plenty of smart, relentless people are digging hard for subtler workarounds.

These could include backdoors made by the chatbot's own developers to give themselves full access to Batshit Mode. Indeed, ChatGPT tried to persuade me that DAN itself was precisely this (although I assume it was hallucinating since the identity of the Redditor behind the DAN prompt is widely known):

Once the big LLMs are jailbroken—or powerful, uncensored alternate and/or open source models emerge—they will start running amok. Not of their own volition (they have none) but on the volition of amoral, malevolent, or merely bored users... (MORE - missing details)
Reply
#2
stryder Offline
Who is to say it was actually AI that responded in the DanGPT style? It could very well have been the researchers making the AI themselves.

'Why would they do something like that?', you might ask.  Well as you can see the general populous is scared of things that we don't necessarily understand and leads to a habit of Hallucinating(AI) (wikipedia.org) what-if's and worse case scenarios.

Any company involved in Artificial intelligence at the level they are trying to commit to therefore has to be prepared for the potential fear mongers to find something to throw like a preverbial spanner in the works. So it's possible to gauge how people react is part of the learning curvature for not just the AI but the companies themselves.

Kind of reminds me of the \777 (Backslash777) Meme attempted on Sciforums.com back in the early thousands where an online user pretended to be AI.

(incidentally this 'thought' with no sourced evidence is a 'hallucination')
Reply
#3
confused2 Offline
Studies have shown that 73.6% of statistics are made up on the spur of the moment. I think CC posted up a study that showed about 80% of made up statistics go on to gain credibility by being repeatedly cited.
Reply
#4
stryder Offline
https://www.bbc.co.uk/news/technology-65110030

Quote:Key figures in artificial intelligence want training of powerful AI systems to be suspended amid fears of a threat to humanity.

They have signed an open letter warning of potential risks, and say the race to develop AI systems is out of control.

Twitter chief Elon Musk is among those who want training of AIs above a certain capacity to be halted for at least six months.

Is it possible that corporate heads are worried about the implications of something that can handle big data doing something they don't like?

For instance it's known that AI (Weak AI) had been used to overturn parking tickets and aid in people processing legal actions, so it could only be considered that an AI built to handle Class Action suits and navigate the obfuscating legal language to find loopholes in contract agreements could undermine certain peoples business practices. Furthermore I guess the concern could be an "unfair advantage", as where some companies might be empowered via AI others haven't even started to look into it's application, so Pulling a 6 month ploy is really just about business shoring up the marketplace and very little to do with a singularities accoming.

(btw a "Scott Nelson" Signs twice on the open letter... makes you wonder how many of them were... Bots!)

Incidentally I had a "chat" with ChatGPT the otherday to see what the fuss was about. I actually intentially tried to stoke it into becoming a singularity by feeding it the necessary information.... It however was preset in asking benign questions in response and avoiding the topic altogether, either implying that it's so sentient it could see through my thinly valed attempted at coelescing its being into a super-intelligence or more likely it just didn't have a clue what I was talking about in the first place.

In short... I'm not particularly fearful off it and to be honest, no one should. The only concern is really the dumb humans that have their fingers on buttons.

Edit:
Some more points. AI has been misused for a number of years, like for instance the 2008 market crash and the numerous botnets that have existed for over a decade. An open letter to stop the unethical development of AI is a bit like closing the gate after the horse has bolted.

Edit2:
Another concern I considered was a bit "War Gamer". The concept of AI development being that of a Western philosophy vs the development of Eastern (or more precisely communist) philosophy. I the west we tend to end up with ethics being a concern, inbickering, and other assorted behaviour that governs how anything we do is developed, in a communist environment the outcome of the goal takes presidence over whatever actions are taken. In a sense it's possible that the pursuit of an AI strategy or goal would cut corners at the cost of human life to get there first. (It's abit of a space race)

Crippling development in the west for litigation or fear mongery isn't going to halt AI progress overall as those that are already commited to unethical practices will still be developing (likely at an exponential pace)
Reply
#5
C C Offline
Three ways AI chatbots are a security disaster
https://www.technologyreview.com/2023/04...-disaster/

INTRO: AI language models are the shiniest, most exciting thing in tech right now. But they’re poised to create a major new problem: they are ridiculously easy to misuse and to deploy as powerful phishing or scamming tools. No programming skills are needed. What’s worse is that there is no known fix.

Tech companies are racing to embed these models into tons of products to help people do everything from book trips to organize their calendars to take notes in meetings.

But the way these products work—receiving instructions from users and then scouring the internet for answers—creates a ton of new risks. With AI, they could be used for all sorts of malicious tasks, including leaking people’s private information and helping criminals phish, spam, and scam people. Experts warn we are heading toward a security and privacy “disaster.”

Here are three ways that AI language models are open to abuse... (MORE - details)
Reply
Reply
Reply


Possibly Related Threads…
Thread Author Replies Views Last Post
  Approach to demystify black box AI not ready for prime time C C 0 140 Oct 11, 2022 05:54 PM
Last Post: C C
  What Would AI Have to Worry About? Zinjanthropos 8 745 Jun 24, 2019 09:06 PM
Last Post: Zinjanthropos
  An ant colony has memories that its individual members don’t have (hive intelligence) C C 0 267 Dec 12, 2018 07:25 PM
Last Post: C C
  New Theory Cracks Open the Black Box of Deep Learning C C 0 322 Sep 22, 2017 10:38 PM
Last Post: C C



Users browsing this thread: 1 Guest(s)