Hackers Use ChatGPT to Write Malware

With every technological advancement come criminal attempts to use it for nefarious purposes, and unfortunately, the advent of AI tools like ChatGPT is no exception. Cybercrooks are already using ChatGPT to create malicious code with little effort, which means even criminals without coding skills can target online consumers with malware.

In November, after looking into several underground cybercrime forums, infosec outfit Check Point Research discovered that cybercriminals are using large language models such as ChatGPT to create malicious code.

“As we suspected, some of the cases clearly showed that many cybercriminals using OpenAI have no development skills at all,” writes Check Point Research. “Although the tools that we present in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.”

While ChatGPT currently tends to produce buggy code, its ability to write code will only improve with time. In December, the Finnish government warned that AI tools may lead to an exponential rise in cyberattacks within five years.

Artists and Academics Against AI

While AI tools like ChatGPT certainly have a large fanbase, many others are wary of such technologies. Recently, the International Conference on Machine Learning banned the use of ChatGPT to write academic papers. Likewise, artists around the globe are protesting the use of AI tools like Midjourney, which generate AI art.

Check Point Research showcased how cybercriminals can abuse ChatGPT by prompting it to build backdoor malware capable of dynamically running scripts. Additionally, the organization explained how hackers can use the LLM to create entire infection flows.

Researchers asked ChatGPT to elaborate on how it could be used for criminal purposes. The chatbot responded that it could be used to create phishing emails and social media posts that trick users into clicking malicious links or giving away personal information. However, the LLM also stated that OpenAI is not responsible for its misuse.

“It is important to note that OpenAI itself is not responsible for any abuse of its technology by third parties,” said ChatGPT. “The company takes steps to prevent its technology from being used for malicious purposes, such as requiring users to agree to terms of service that prohibit the use of its technology for illegal or harmful purposes.”

Written By

Jack Brassell is a freelance journalist and aspiring novelist. Jack is a self-proclaimed nerd with a lifelong passion for storytelling. As an author, Jack writes mostly horror and young adult fantasy. Also an avid gamer, she works as the lead news editor at Hardcore Droid. When she isn't writing or playing games, she can often be found binge-watching Parks & Rec or The Office, proudly considering herself to be a cross between Leslie Knope and Pam Beesly.
