It has already become clear that artificial intelligence is here to stay and that it will make some tasks, especially work tasks, "easier." However, as Rumpelstiltskin would say, "all magic comes with a price," and AI must be handled with care. Now, several Samsung employees have mistakenly leaked confidential code to ChatGPT.
Although some countries already prohibit the use of ChatGPT over problems with how it handles and collects data, this time the fault did not lie with OpenAI's AI but with human oversight.
Samsung had allowed its workers to use the digital tool to improve productivity. However, things got out of hand on three separate occasions within a span of 20 days.
According to The Economist, the three employees used the chatbot for support in fixing bugs, but inadvertently fed the tool sensitive process and performance information, leading to a data leak.
The three incidents: Samsung employees mistakenly leak code
According to the report, the first incident occurred when an employee sent the source code of a proprietary program to ChatGPT. Why? He wanted the AI to help him fix errors in it.
In the second, a worker entered the test patterns Samsung uses to identify defective chips, hoping the AI would optimize the method. These patterns, however, are strictly confidential: they allow the company to refine production methods, reduce defects, and speed up verification, which lowers costs.
The third and final incident involved an employee who used the Naver Clova app to convert a recording into text. The problem was that the audio came from a business meeting, and he later asked ChatGPT to help generate a presentation from the resulting transcript.
Why is it important not to disclose data to ChatGPT?
Not only is it unclear where all the information entered into the chatbot is stored, but OpenAI staff also review the content of questions submitted to ChatGPT, and that data can be used for the AI's training. So, as in Samsung's case, corporate secrets embedded in queries could potentially end up being exposed to other users.
In addition, OpenAI itself displays a warning intended to prevent this type of situation: "Do not enter confidential information."
What Samsung says
After the company became aware of the employees' actions, it warned them of the risks involved in using ChatGPT. According to the same outlet, Samsung noted that data entered into the chatbot is transmitted to and stored on external OpenAI servers, making it impossible for the company to retrieve.
Finally, the company is considering blocking access to ChatGPT to prevent further incidents. Some reports also say Samsung is developing its own AI bot, though none of this has been confirmed.
Editorial Team: The editorial team of EMPRENDEDOR.com, which for more than 27 years has worked to promote entrepreneurship.