AI-powered chatbots have become an essential tool for many users, who entrust models like ChatGPT or Copilot with personal or sensitive queries, assuming that, since conversations are end-to-end encrypted, no one can read them. But an investigation shows that these chats are not as secure as they seem, and that hackers can access them with surprising accuracy.
As Offensive AI Lab has discovered, attackers can read these conversations through a side-channel attack, which consists of collecting information that leaks from the physical implementation of a system, such as power consumption, the time the model takes to process data, or the electromagnetic radiation emitted during a given period. The captured information is then processed by a purpose-built AI which, on average, can decipher 55% of the responses captured from ChatGPT or Copilot; 29% of the time, responses are reconstructed with perfect word accuracy.
This is possible because language models like ChatGPT generate and stream responses as a series of tokens transmitted from the server to the user. The tokens are end-to-end encrypted, but the size of each packet reveals the length of the token it carries, and it is this token-length side channel that attackers exploit.
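The leak described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' code: it assumes each streamed token travels in its own packet and that encryption adds a fixed per-packet overhead (the `OVERHEAD` value is invented for the example).

```python
# Hypothetical sketch of the token-length side channel: the ciphertext
# size minus a fixed encryption/header overhead reveals each token's
# length, even though the token text itself stays encrypted.

OVERHEAD = 40  # assumed fixed bytes of overhead per packet (illustrative)

def token_lengths(packet_sizes):
    """Recover the length of each streamed token from packet sizes."""
    return [size - OVERHEAD for size in packet_sizes]

# A response streamed token by token, e.g. "The", " answer", " is", " yes":
packets = [43, 47, 43, 44]
print(token_lengths(packets))  # [3, 7, 3, 4]
```

An eavesdropper who records only packet sizes thus obtains the exact sequence of token lengths, which is the raw material the attack's inference model works from.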
Apparently, the major chatbots, such as ChatGPT and Copilot, suffer from this drawback despite being end-to-end encrypted. According to Offensive AI Lab, only Gemini, Google's AI, is safe from having its conversations accessed through this side-channel attack.
ChatGPT, Copilot and other AI models share the same problem
The research firm also explains how the attack manages to read the chats of AIs such as ChatGPT or Copilot: it involves training a model on data gathered from raw attacks to improve its ability to infer the sequence of tokens.
The approach is to train a state-of-the-art LLM to translate token-length sequences into readable sentences. Furthermore, by providing the LLM with the context of previously inferred sentences, it can further narrow down the possible sentences, thus reducing the entropy involved in inferring entire paragraphs.
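To see why a length sequence narrows things down so much, consider a toy version of the idea. This is an illustrative sketch, not the researchers' model: where they train an LLM on captured traffic, we simply filter a small hand-made candidate list by word-length signature.

```python
# Illustrative sketch: a sequence of leaked lengths acts as a fingerprint
# that eliminates most candidate responses. (Toy whitespace tokenization;
# real LLM tokenizers split text differently.)

def length_signature(sentence):
    """Length of each whitespace-separated word in the sentence."""
    return [len(w) for w in sentence.split()]

candidates = [
    "Yes, that is correct",
    "No, that is incorrect",
    "Yes, you are correct",
]

observed = [4, 4, 2, 7]  # lengths leaked by the side channel

matches = [s for s in candidates if length_signature(s) == observed]
print(matches)  # ['Yes, that is correct']
```

In practice the candidate space is enormous, which is why the attack needs a trained language model rather than a lookup, but the principle is the same: each leaked length sequence is compatible with only a tiny fraction of plausible sentences.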
Offensive AI Lab.
In any case, until OpenAI (ChatGPT) or Microsoft (Copilot), to name two companies with artificial intelligence chatbots, provide a solution, this discovery represents a serious privacy problem for the users of these models, especially considering that they tend to ask questions of a personal and sensitive nature, such as doubts about terminating a pregnancy, about illnesses, and so on.
For the moment, the only thing users can do is avoid revealing personal information to chatbots like ChatGPT or Copilot. Many of these models, in fact, display a similar warning the first time a chat window is opened.