In a shocking revelation, cyber criminals are using ChatGPT, the artificial-intelligence chatbot developed by Microsoft-backed OpenAI, to create Telegram bots that can write malware and steal personal data. The latest research by Check Point Research (CPR) highlights the potential dangers of AI technology and how cyber criminals are finding ways to exploit it.
According to the research, hackers are bypassing ChatGPT's limitations by using the OpenAI API directly to generate malicious content such as phishing emails and malware. They do this by creating Telegram bots that wrap the API, and then advertise the bots on hacking forums to increase their reach.
CPR had earlier discovered cyber criminals using ChatGPT to improve the code of a basic infostealer malware strain from 2019. The current version of OpenAI's API is accessible to external applications and has very few anti-abuse measures in place, making it easier for cyber criminals to generate malicious content.
The researchers discovered a cyber criminal advertising a new service on an underground forum: a Telegram bot that uses the OpenAI API without any limitations or restrictions. The criminal had written a basic script that calls the API directly, sidestepping the anti-abuse restrictions built into the ChatGPT interface.
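To see why such a script is trivial to write, consider the general relay pattern CPR describes: a bot simply forwards a user's message straight to OpenAI's public completion API, so none of the safeguards in the ChatGPT web interface apply. The sketch below is a hypothetical illustration of that pattern (endpoint and model name reflect OpenAI's public text-completion API of early 2023; the bot framing and function names are assumptions, not the actual criminal tooling).

```python
# Schematic sketch of an API-relay script: a chat message is forwarded
# verbatim to OpenAI's completion endpoint, bypassing the moderation
# layered onto the ChatGPT web interface. For illustration only.
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/completions"

def build_request(prompt: str, api_key: str) -> tuple[dict, dict]:
    """Assemble the headers and JSON payload for a raw API call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "text-davinci-003",  # completion model available at the time
        "prompt": prompt,             # forwarded verbatim from the chat message
        "max_tokens": 256,
    }
    return headers, payload

def relay(prompt: str, api_key: str) -> str:
    """Send the forwarded prompt to the API and return the generated text."""
    headers, payload = build_request(prompt, api_key)
    req = urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]
```

Because the API accepts any prompt from any application holding a key, wiring this into a Telegram bot handler is a few lines of glue code, which is what makes the anti-abuse gap CPR identifies significant.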
Using ChatGPT for malicious purposes is not a new idea; researchers have been documenting how cyber criminals leverage the OpenAI platform, and ChatGPT specifically, to generate malicious content. CPR has also witnessed attempts by Russian cyber criminals to bypass OpenAI's restrictions.
Cyber criminals are increasingly interested in ChatGPT because the AI technology behind it can make hacking more cost-efficient and sophisticated. The latest research by CPR highlights the need for better anti-abuse measures and the importance of being aware of the potential dangers of using AI technology.
The abuse of AI technology by cyber criminals is a growing concern that underscores the need for stronger security measures. Users are advised to exercise caution with AI tools and remain vigilant for suspicious activity or emails.