With every technological advancement come criminal attempts to exploit it for nefarious purposes, and unfortunately, the advent of AI tools like ChatGPT is no exception. Cybercrooks are already using ChatGPT to create malicious code effortlessly, meaning even criminals without coding skills can target online consumers with malware.
In November, after examining several underground cybercrime forums, infosec outfit Check Point Research discovered that cybercriminals are using large language models such as ChatGPT to create malicious code.
“As we suspected, some of the cases clearly showed that many cybercriminals using OpenAI have no development skills at all,” writes Check Point Research. “Although the tools that we present in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.”
While ChatGPT currently tends to produce buggy code, its ability to write code will only improve with time. In December, the Finnish government warned that AI tools may lead to an exponential rise in cyberattacks within five years.
Artists and Academics Against AI
While AI tools like ChatGPT certainly have a large fanbase, many others are wary of such technologies. Recently, the International Conference on Machine Learning banned using ChatGPT in writing academic papers. Likewise, artists around the globe are protesting the use of AI tools like Midjourney, which create AI-generated art.
Check Point Research showcased how cybercriminals can use ChatGPT by prompting it to build backdoor malware capable of dynamically running scripts. Additionally, the organization explained how hackers can use the LLM to create entire infection flows.
Researchers asked ChatGPT to elaborate on how it could be used for criminal purposes. The chatbot responded that it could create phishing emails and posts on social media that trick users into clicking unsavory links or giving away personal information. However, the LLM also stated that OpenAI is not responsible for its misuse.
“It is important to note that OpenAI itself is not responsible for any abuse of its technology by third parties,” said ChatGPT. “The company takes steps to prevent its technology from being used for malicious purposes, such as requiring users to agree to terms of service that prohibit the use of its technology for illegal or harmful purposes.”