Samsung Prohibits its Employees from Using ChatGPT

Samsung has banned its employees from using popular generative AI tools, such as ChatGPT, in the workplace, after discovering that staff had uploaded sensitive code to the platform.

On Monday, the South Korean tech giant notified employees of one of its largest divisions of the new policy in a memo.

According to the memo, the company is concerned that data sent to generative AI platforms, including Google's Bard and Microsoft's Bing, is stored on external servers, which makes it difficult to retrieve and delete and could end up exposing it to other users.

The company conducted an internal survey last month on the use of AI tools, and said that 65 percent of respondents believe such services pose a security risk.

According to the memo, Samsung engineers accidentally leaked internal source code earlier in April by uploading it to ChatGPT.

“Interest in generative AI platforms, such as ChatGPT, is growing both internally and externally,” Samsung told employees. “While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about the security risks presented by generative AI,” the memo added.

Samsung is the latest major company to express concern about the technology. In February, just two months after OpenAI’s chatbot sparked a storm of interest in the technology, several Wall Street banks, including JPMorgan Chase, Bank of America, and Citigroup, banned or restricted its use. Italy banned ChatGPT over privacy concerns, then reversed its stance in recent days.

Samsung’s new rules prohibit the use of generative AI systems on company-owned computers, tablets, and smartphones, as well as on its internal networks. The rules do not apply to devices the company sells to consumers, such as mobile devices running Google’s Android operating system or personal computers running Microsoft’s Windows.

Samsung asked employees who use ChatGPT and other tools on personal devices not to submit any company-related information or personal data that could reveal its intellectual property. The company warned in the memo: “We ask you to strictly adhere to our security guidelines. Failure to do so may result in company information being leaked or stolen, and will lead to disciplinary action up to and including termination of employment.”

Meanwhile, the company is building its own AI tools for document translation and summarization, as well as for software development. It is also working on ways to prevent sensitive company information from being uploaded to external services.

Last month, ChatGPT added an “incognito” mode that allows users to prevent their conversations from being used to train its AI models.

On both Android and iOS, the new Bing chatbot is now integrated into the SwiftKey keyboard in three main ways: Search, Chat, and Tone. After the update, a Bing icon appears above the keyboard, and from there you can tap one of the three modes to interact with the bot.

With the Chat function, the new Bing search engine can be accessed directly for detailed queries, and with the Tone function you can communicate more effectively, using AI to tailor your text to any situation. This function can be used when writing formal emails or while learning a new language.

The company believes the Tone feature gives users styles that make their words sound more professional, casual, polite, or concise for a social post.

With the Search function, the web can be searched quickly from the keyboard without switching applications. This can help while chatting with a friend, to look up relevant information such as the weather, nearby restaurants, or stock prices.
