ChatGPT has set the artificial intelligence scene alight this year, and one area also seeing a spike in numbers is cybercrime.
Research conducted by cybersecurity firm NordVPN has found a seven-fold rise in hackers discussing how to manipulate the chatbot. Dark Web forum posts about the AI tool surged from 120 in January to 870 in February, a 625% increase.
Forum threads on ChatGPT rose by 145%, from 37 to 91, in a month. Finding ways to exploit the bot's capabilities is a hot topic among Dark Web users and cybercriminals. Reports show that hackers are attempting to use tools like ChatGPT to create malicious code. These possibilities are opening doors for users who may otherwise lack the skills and are instead turning to AI to assist them.
The dark side of AI
We have already seen various concerns surrounding the advancement of AI and how it learns by scanning data on the internet. This has left many, especially creatives such as artists, worried that these systems are stealing their work. As with all new technology, though, there is a period of adjustment. Given the speed at which big companies like Microsoft are adopting AI, we will likely see more significant regulations introduced.
The hacking community has become keen to utilise AI, with Dark Web thread titles including "How to break ChatGPT" and "ChatGPT as a phishing tool". These searches show that hackers are looking for ways to use and exploit these tools to scam or hack online users. With the backing of powerful AI, there are concerns that these criminals could commit fraud, such as romance scams, and target multiple victims simultaneously.
Marijus Briedis, a cybersecurity expert at NordVPN, said: "Chatbots like ChatGPT can make our lives easier in many ways, like performing mundane written tasks, summarising complex subjects or suggesting a holiday itinerary. For cybercriminals, however, the revolutionary AI can be the missing piece of the puzzle for a number of scams."
Things to look out for
ChatGPT is one of the fastest-growing apps in the world, reaching 100 million users in just two months. Before long, we will see more instances of AI in everyday use: Microsoft is incorporating artificial intelligence into its search engine Bing, and Google is creating its own chatbot, Bard.
NordVPN shared some tips to help keep chatbots in check, such as avoiding sharing any personal information. AI chatbots can learn from each conversation they have, so keeping questions and responses non-personal makes it less likely that sensitive data will be collected.
Artificial intelligence can also be used for phishing attacks, as hackers can now utilise these tools to craft more convincing scams. Individuals could often spot phishing attacks through poor grammar, spelling and sentence structure; with AI, this may be a thing of the past. It's therefore essential to check any sender's address and look for inconsistencies in links or domain names. Finally, an antivirus system can help ensure that you are alerted to any suspicious files.
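That link-checking advice can be automated in a simple way. The sketch below, a minimal illustration rather than a production filter, flags a link whose visible text displays one domain while the underlying href points to a different one, a classic phishing tell. The function names (`domain_of`, `looks_suspicious`) and the example URLs are hypothetical, chosen for illustration only.

```python
from urllib.parse import urlparse

def domain_of(url: str) -> str:
    """Extract the host from a URL-like string, lower-cased."""
    # urlparse needs a scheme or leading // to recognise the host part.
    parsed = urlparse(url if "//" in url else "//" + url)
    return (parsed.hostname or "").lower()

def looks_suspicious(display_text: str, href: str) -> bool:
    """Flag a link whose visible text shows one domain but whose
    actual target (href) points somewhere else entirely."""
    shown = domain_of(display_text)
    actual = domain_of(href)
    # If the visible text doesn't look like a URL, there is nothing to compare.
    if not shown or "." not in shown:
        return False
    # A mismatch between the shown domain and the real target is the red flag,
    # allowing for the target being a subdomain of the shown domain.
    return shown != actual and not actual.endswith("." + shown)

# The text claims paypal.com but the link goes to an unrelated domain.
print(looks_suspicious("https://www.paypal.com/login",
                       "https://paypa1-secure.example.net/login"))  # True
# Text and target agree, so nothing is flagged.
print(looks_suspicious("https://www.paypal.com",
                       "https://www.paypal.com/account"))  # False
```

The same idea underlies the manual advice in the article: hover over a link, compare what it says with where it actually goes, and treat any mismatch as a warning sign.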