It all started on March 20, when users on Twitter and Reddit began reporting that queries made by other users were appearing in their own chat history, in some cases in other languages. As OpenAI confirmed in a statement, this led the company to take the ChatGPT service offline until the bug was corrected, after which service was restored.
One user asked OpenAI whether they had been hacked after seeing conversations in foreign languages in the sidebar of their chat history. In addition to the query history, some people also reported that other users’ email addresses appeared on the payment page for ChatGPT Plus:
In the case of payment data, OpenAI explained that the bug affected 1.2% of active ChatGPT Plus subscribers, and although the exposed information included the last four digits of credit cards, complete card numbers were never exposed. The company behind ChatGPT said it has contacted the users affected by the exposure and is confident that their personal data is no longer at risk.
OpenAI confirmed that the exposure was caused by a bug in an open source Redis client library, and said it submitted a patch to the Redis maintainers, who have since fixed the bug.
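To illustrate the general class of bug OpenAI described, here is a minimal, hypothetical Python sketch: if a request is abandoned after a command is sent but before its reply is read, a reused connection can hand that leftover reply to the next user. The `FakeConnection` class and its methods are illustrative assumptions, not OpenAI's actual code or the real Redis client API.

```python
from collections import deque

class FakeConnection:
    """Simplified stand-in for a pooled client connection.

    Replies arrive in the order commands were sent. If a caller
    abandons a request without reading its reply, the reply stays
    queued and is delivered to whoever reads the connection next.
    """
    def __init__(self):
        self._pending = deque()  # replies waiting to be read

    def send(self, command, user):
        # The server answers every command it receives.
        self._pending.append(f"data for {user} ({command})")

    def read_reply(self):
        return self._pending.popleft()

conn = FakeConnection()

# User A sends a request but is cancelled before reading the reply,
# and the connection is returned to the pool still holding her data.
conn.send("GET history", user="alice")

# User B reuses the same connection and receives Alice's reply.
conn.send("GET history", user="bob")
print(conn.read_reply())  # → "data for alice (GET history)"
```

The fix for this kind of bug is to discard (rather than reuse) any connection whose request was interrupted mid-flight, so stale replies can never cross user boundaries.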
In addition to the bug that exposed this information, a vulnerability was reported to OpenAI that allowed Web Cache Deception attacks. This vulnerability, which has since been fixed, made it possible to take over other users’ accounts, view their query history, and access their payment data.
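In general terms, a web cache deception attack tricks a victim into visiting a static-looking URL (for example, one ending in `.css`) that the origin server actually answers with a private page; a naive edge cache then stores that private response under a public cache key, where the attacker can retrieve it. The sketch below is a hypothetical simulation of that mechanism, not ChatGPT's actual infrastructure; `origin`, `edge`, and the routing rules are invented for illustration.

```python
# Hypothetical simulation of web cache deception.
cache = {}  # naive edge cache, keyed only by URL path

def origin(path, session_user):
    # Vulnerable origin: routes any /account/... path to the account
    # page, ignoring the bogus suffix, and returns private data.
    if path.startswith("/account"):
        return f"payment data for {session_user}"
    return "public page"

def edge(path, session_user):
    # Naive CDN: caches any URL that *looks* static.
    looks_static = path.endswith((".css", ".js", ".png"))
    if looks_static and path in cache:
        return cache[path]
    resp = origin(path, session_user)
    if looks_static:
        cache[path] = resp  # private response stored under a public key
    return resp

# 1. The victim is lured into visiting a crafted static-looking URL.
victim_view = edge("/account/x.css", session_user="victim")

# 2. The attacker requests the same URL and gets the cached private page.
attacker_view = edge("/account/x.css", session_user="attacker")
print(attacker_view)  # → "payment data for victim"
```

Typical mitigations are to cache only by content type the origin actually returned, or to return 404 for paths the application does not explicitly serve, so private responses never match a cacheable rule.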
ESET’s research team recently warned about various scams and hoaxes circulating that take advantage of ChatGPT’s popularity. Examples included a fake Google Chrome extension called “Quick access to Chat GPT” that cybercriminals used to steal Facebook accounts, which were in turn used to create bots and deliver malvertising. This was not the only malicious extension exploiting the ChatGPT name: Guardio researchers revealed a new variant of the same malware that also steals Facebook accounts, in this case a Trojanized version of a legitimate extension called “ChatGPT for Google”.
“As we can see, ChatGPT is attractive to malicious actors, whether to use the tool for malicious purposes or to impersonate it and trick unsuspecting people. This trend will likely continue, and we will keep seeing attempts to exploit vulnerabilities or commit fraud in its name,” adds Camilo Gutiérrez Amaya, Head of the ESET Latin America Research Laboratory.