Group-IB, a Singapore-based global cybersecurity firm, has identified an alarming trend: compromised credentials for OpenAI's ChatGPT are being traded on dark web marketplaces. Over the past year, the company found more than 100,000 malware-infected devices with saved ChatGPT credentials.
The Asia-Pacific region reportedly saw the highest concentration of stolen ChatGPT accounts, accounting for more than 40 percent of cases. According to Group-IB, the cybercrime was committed by malicious actors using Raccoon Infostealer, a type of malware that collects stored information from infected computers.
ChatGPT and a need for cybersecurity
In June 2023, OpenAI, the developer of ChatGPT, pledged $1 million for AI cybersecurity initiatives following an unsealed Justice Department indictment against 26-year-old Ukrainian citizen Mark Sokolovsky over his alleged involvement with Raccoon Infostealer. Since then, awareness of infostealers and their effects has continued to spread.
Specifically, this type of malware harvests a wide variety of personal data: login credentials saved in the browser, bank card details, crypto wallet information, browsing history, and cookies. Once collected, the data is forwarded to the malware operator. Infostealers usually spread through phishing emails and are alarmingly effective due to their simplicity.
Over the past year, ChatGPT has emerged as a considerably powerful and influential tool, especially within the blockchain and Web3 industries. It has been used in the metaverse for a variety of purposes, such as the creation of a $50 million meme coin. While ChatGPT's now-iconic arrival has taken the tech world by storm, the chatbot has also become a lucrative target for cybercriminals.
Group-IB recognizes this growing cyber risk and advises ChatGPT users to strengthen their account security by regularly updating passwords and enabling two-factor authentication (2FA). These measures have become increasingly popular as cybercrime continues to rise. 2FA simply requires users to enter an additional verification code alongside their password to access their accounts.
“Many enterprises integrate ChatGPT into their operational flow. Employees enter classified correspondence or use the bot to optimize proprietary code,” Group-IB Head of Threat Intelligence Dmitry Shestakov said in a press release. “Since ChatGPT’s default configuration keeps all conversations, this could inadvertently provide a wealth of sensitive information to attackers if they obtain account credentials.”
Shestakov further noted that his team is constantly monitoring underground communities to quickly identify hacks and leaks to mitigate cyber risk before further damage occurs. Nevertheless, regular security awareness training and vigilance against phishing attempts are still recommended as additional protective measures.
The evolving cyber threat landscape underscores the importance of proactive and comprehensive cybersecurity measures. From ethical questions to questionable Web3 integrations, AI-powered tools like ChatGPT raise many concerns, and as their use continues to grow, so does the need to secure these technologies against potential cyberthreats.
Editor’s Note: This article was written by an nft now contributor in collaboration with OpenAI’s GPT-4.