
https://www.technologyreview.com/2024/05...-using-ai/
INTRO: Artificial intelligence has brought a big boost in productivity—to the criminal underworld.
Generative AI provides a new, powerful tool kit that allows malicious actors to work far more efficiently and internationally than ever before, says Vincenzo Ciancaglini, a senior threat researcher at the security company Trend Micro.
Most criminals are “not living in some dark lair and plotting things,” says Ciancaglini. “Most of them are regular folks that carry on regular activities that require productivity as well.”
Last year saw the rise and fall of WormGPT, an AI language model built on top of an open-source model and trained on malware-related data. It was created to assist hackers and had no ethical rules or restrictions. But last summer, its creators announced they were shutting the model down after it started attracting media attention. Since then, cybercriminals have mostly stopped developing their own AI models, opting instead for tricks with existing tools that work reliably.
That’s because criminals want an easy life and quick gains, Ciancaglini explains. For any new technology to be worth the unknown risks associated with adopting it—for example, a higher risk of getting caught—it has to be better and bring higher rewards than what they’re currently using. Here are five ways criminals are using AI now... (MORE - details)
COVERED:
Phishing
Language models allow scammers to generate messages that sound like something a native speaker would have written.
Deepfake audio scams
While deepfake videos remain complicated to make and relatively easy for humans to spot, neither is true of audio deepfakes, which are cheap to produce and hard to detect.
Bypassing identity checks
These services work by offering a fake or stolen ID and superimposing a deepfake image on top of the real person's face to trick the verification system on an Android phone's camera.
Jailbreak-as-a-service
Most models come with rules around how they can be used. Jailbreaking allows users to manipulate the AI system to generate outputs that violate those policies—for example, to write code for ransomware or generate text that could be used in scam emails.
Doxxing and surveillance
As an example of how this works, you could ask a chatbot to pretend to be a private investigator with experience in profiling, then have it analyze text a target has written to infer personal details from small clues. The more information there is about a person on the internet, the more vulnerable they are to being identified.