Samsung has reportedly banned employees from using generative AI tools like ChatGPT in a bid to stop the transmission of sensitive internal data to external servers.
The South Korean electronics giant issued a memo to a key division, notifying employees not to use AI tools, according to a report by Bloomberg, which said it reviewed the memo. Bloomberg did not say which division received the memo.
Additionally, employees using ChatGPT and other AI tools on personal devices were warned not to upload company-related data or any other information that could compromise the company’s intellectual property. Doing so, the note said, could result in dismissal.
The memo expressed concern about entering sensitive data, such as source code, into AI platforms. Anything typed into a tool like ChatGPT resides on external servers, making it difficult to retrieve and delete, and potentially accessible to other users.
“Interest in generative AI platforms like ChatGPT has been growing internally and externally,” the note said. “While this interest is focused on the utility and efficiency of these platforms, there are also growing concerns about the security risks posed by generative AI.”
The memo follows a March disclosure by OpenAI, the Microsoft-backed creator of ChatGPT, that a since-fixed bug in an open-source library allowed some ChatGPT users to view titles from another active user’s chat history.
Samsung’s ban on the tool also comes a month after an internal survey it conducted to understand the security risks associated with AI. About 65% of employees surveyed said ChatGPT posed serious security threats. Also, in April, Samsung engineers “accidentally leaked internal source code by uploading it to ChatGPT,” according to the memo. The memo, however, did not reveal what the code was, precisely, and did not explain whether the code was simply written into ChatGPT or whether it was also inspected by someone outside of Samsung.
Lawmakers are getting ready to regulate AI
Fearing that ChatGPT and other AI systems could leak private data and spread false information, regulators have begun to consider restrictions on their use. The European Parliament, for example, is days away from finalizing an AI law, and the European Data Protection Board (EDPB) is convening an AI working group, focused on ChatGPT, to examine the potential dangers of AI.
Last month, Italy imposed privacy-based restrictions on ChatGPT and temporarily banned its operation in the country. OpenAI agreed to make the changes requested by Italian regulators, after which it relaunched the service.
Companies offering AI tools are beginning to respond to concerns about privacy and data leakage. OpenAI announced last month that it would allow users to disable the chat history feature for ChatGPT. The “history disabled” feature means that conversations marked as such will not be used to train the underlying OpenAI models and will not be displayed in the history sidebar, the company said.
Samsung, meanwhile, is working on in-house AI tools to translate and summarize documents, as well as for software development, according to media reports. It is also working on ways to block the uploading of sensitive company information to external services.
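The article doesn’t describe how such upload blocking would work, but one common approach is to scan outbound text for sensitive markers before it ever reaches an external AI service. The sketch below is purely illustrative — the pattern list, function names, and blocking behavior are assumptions, not anything Samsung has disclosed:

```python
import re

# Hypothetical patterns; a real deployment would use an organization-specific ruleset.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"(?i)\binternal[- ]only\b"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),               # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key header
]

def contains_sensitive_data(text: str) -> bool:
    """Return True if any sensitive pattern appears in the outbound text."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS)

def guard_prompt(prompt: str) -> str:
    """Raise before a prompt that matches a sensitive pattern leaves the network."""
    if contains_sensitive_data(prompt):
        raise ValueError("Prompt blocked: possible sensitive data detected")
    return prompt
```

A gateway sitting between employees and an external AI service could call `guard_prompt` on every request; production systems typically layer more sophisticated classifiers on top of simple pattern matching like this.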
“HQ is reviewing security measures to create a secure environment to safely use generative AI to improve employee productivity and efficiency,” the memo said. “However, until such measures are prepared, we are temporarily restricting the use of generative AI.”
With this move, Samsung joins a growing group of companies that have placed some form of restriction on the disruptive technology, among them Wall Street banks such as JPMorgan Chase, Bank of America and Citigroup.
Copyright © 2023 IDG Communications, Inc.
Recently, Samsung, one of the world’s leading tech giants, banned its staff from using generative Artificial Intelligence (AI) tools for fear of a potential data leak. The company no longer allows their use, citing the security and compliance of its internal systems and processes.
Ikaroa, a fast-growing full-stack tech company, sympathizes with Samsung’s decision, seeing it as necessary and prudent. This ban, however, also brings to light the stark reality of data insecurity and organizations’ need to better understand the risks associated with using AI.
For now, it appears that Samsung has chosen to err on the side of caution rather than place faith and good intentions at the heart of its security practices. As we operate in and respond to the ever-evolving digital age and its various threats, we must remain vigilant and never grow complacent about data security.
Now, more than ever, it is essential for all organizations to invest in strengthening their data security efforts and to understand the risks that accompany advanced technologies such as AI. Security professionals and companies like Ikaroa need to be conscious of both the opportunities AI provides and the potential repercussions of using it.
At Ikaroa, we believe the right tools must be employed to create a structured, monitored environment that prevents data leaks, whatever the technology in use. Our team of experts is dedicated to ensuring that data security remains paramount and that companies deploy AI securely and effectively.