Samsung Bans The Use Of AI Tools Following A Data Leak Via ChatGPT

Citing the risk of data leaks, Samsung has prohibited employees from using AI tools such as ChatGPT, Google Bard, and Bing. The company learned that one of its employees had uploaded confidential code to ChatGPT, where data is difficult to retrieve and delete and might be seen by other users. In a Samsung survey on AI tools, 65% of participants said such tools pose a security risk, and an accidental disclosure of internal source code in April prompted the company to draft a new security policy for generative AI.

Samsung has banned the use of AI tools including OpenAI's ChatGPT, Google Bard, and Bing. The ban was announced after the company learned that personnel had uploaded sensitive code to the platform. In a memo reviewed by Bloomberg, the company informed staff at one of its largest divisions that the new policy had been implemented.
The company is concerned that data sent to AI systems is difficult to retrieve and delete, and that other users might gain access to it. In an internal poll on the use of AI tools at work, 65% of participants acknowledged that the tools carry a security risk. According to the memo, the new policy was prompted by an incident in April in which Samsung engineers unintentionally uploaded internal source code to ChatGPT.
The memo states: "HQ is reviewing security measures to create a secure environment for safely using generative AI to improve employees' productivity and efficiency." Until those measures are ready, however, "we are temporarily limiting the use of generative AI."
