Russian cybercriminals have been spotted trying to circumvent restrictions on ChatGPT so they can use the advanced artificial intelligence chatbot for nefarious purposes.
Check Point Research (CPR) said it found multiple discussions on underground forums where hackers shared various methods, including using stolen payment cards to pay for upgraded OpenAI accounts, bypassing geofencing restrictions, and using semi-legal Russian online SMS services to register for ChatGPT.
ChatGPT is a new artificial intelligence (AI) chatbot that is making headlines due to its versatility and ease of use. Cybersecurity researchers have seen hackers use the tool to generate believable phishing emails, as well as code for malicious, macro-laden Office files.
However, the tool is not easy to abuse due to the many limitations set by OpenAI. And following the invasion of Ukraine, Russian hackers have even more hurdles to overcome.
According to Sergey Shykevich, Threat Intelligence Group Manager at Check Point Software Technologies, those barriers are not strong enough:
“It is not very difficult to bypass OpenAI’s restrictions on country-specific access to ChatGPT. Now, we are seeing Russian hackers already discussing and examining how to bypass geo-fences to use ChatGPT for their malicious purposes.
We believe that these hackers are likely to try to implement and test ChatGPT in their daily criminal activities. Cybercriminals are increasingly interested in ChatGPT because the AI technology behind it can make hackers more profitable,” Shykevich said.
But hackers don’t just want to use ChatGPT – they’re also trying to exploit the tool’s growing popularity to spread all kinds of malware and steal money. For example, Apple’s mobile app repository, the App Store, hosts an app that poses as the chatbot but charges about $10 a month for a subscription. Other apps (some of which are also available on Google Play) charge as much as $15 for the service.