Article: Five ways criminals are using AI

Posted by C C
https://www.technologyreview.com/2024/05...-using-ai/

INTRO: Artificial intelligence has brought a big boost in productivity—to the criminal underworld.

Generative AI provides a new, powerful tool kit that allows malicious actors to work far more efficiently and internationally than ever before, says Vincenzo Ciancaglini, a senior threat researcher at the security company Trend Micro.

Most criminals are “not living in some dark lair and plotting things,” says Ciancaglini. “Most of them are regular folks that carry on regular activities that require productivity as well.”

Last year saw the rise and fall of WormGPT, an AI language model built on top of an open-source model and trained on malware-related data, which was created to assist hackers and had no ethical rules or restrictions. But last summer, its creators announced they were shutting the model down after it started attracting media attention. Since then, cybercriminals have mostly stopped developing their own AI models. Instead, they are opting for tricks with existing tools that work reliably.

That’s because criminals want an easy life and quick gains, Ciancaglini explains. For any new technology to be worth the unknown risks associated with adopting it—for example, a higher risk of getting caught—it has to be better and bring higher rewards than what they’re currently using. Here are five ways criminals are using AI now... (MORE - details)

COVERED:

Phishing

Language models allow scammers to generate messages that sound like something a native speaker would have written.

Deepfake audio scams

While deepfake videos remain complicated to make and easier for humans to spot, that is not the case for audio deepfakes.

Bypassing identity checks

They work by offering a fake or stolen ID and superimposing a deepfake image over the real person's face to trick the camera-based verification system on an Android phone.

Jailbreak-as-a-service

Most models come with rules around how they can be used. Jailbreaking allows users to manipulate the AI system to generate outputs that violate those policies—for example, to write code for ransomware or generate text that could be used in scam emails.

Doxxing and surveillance

As an example of how this works, you could ask a chatbot to pretend to be a private investigator with experience in profiling. The more information there is about a person on the internet, the more vulnerable they are to being identified.

