ChatGPT already victim of massive hack

The ChatGPT craze also has its limits. While the generative artificial intelligence (AI) tool already counted 200 million users by mid-April, the number of stolen accounts has also skyrocketed since the public launch of the conversational tool ChatGPT-3 at the end of November 2022, according to cybersecurity firm Group-IB.

In one year, more than 100,000 ChatGPT accounts have been compromised by malware. The software was launched publicly seven months ago, but had existed since 2020 in a more specialized form aimed at developers, and thefts of account data had already begun by then.

The pace of these hacks has accelerated sharply since the release of ChatGPT-4, a further improved version, in March 2023, reaching a peak in May with nearly 27,000 compromised accounts. Many account credentials ended up for sale, or freely available, on the darknet, Group-IB noted, in particular harvested by Raccoon Stealer, malware specialized in online data theft.

The regions most affected by these hacks are Asia-Pacific, followed by the Middle East and Africa, then Europe. The most affected country is India, while France ranks seventh among the countries hit by these account thefts.

Danger for businesses

Data-stealing software can collect login credentials, credit card numbers, crypto wallet details and browsing history. This malware infects as many computers as possible to maximize the chances of a successful malicious operation.

On ChatGPT, however, users can submit a large amount of information, both personal and professional, through their requests to the site. More and more people use it to solve work-related problems, and in doing so transmit data that may prove sensitive.

Most importantly, “many companies integrate ChatGPT into their operational flow. Employees enter classified correspondence or use the bot to optimize proprietary code,” comments Dmitry Shestakov, Head of Threat Intelligence at Group-IB.

Indeed, conversations between a user and the ChatGPT chatbot remain stored on the platform for an indefinite period, increasing the risk that sensitive information will be stolen if an account is compromised.
