
ChatGPT’s Data Protection Blind Spots and How Security Teams Can Solve Them

April 20, 2023 | The Hacker News | Artificial Intelligence / Data Security

In the short time since their inception, ChatGPT and other generative AI platforms have rightly earned a reputation as powerful productivity tools. However, the same technology that enables the rapid production of high-quality text on demand can also expose sensitive corporate data. A recent incident, in which Samsung software engineers pasted proprietary code into ChatGPT, clearly demonstrates that this tool can easily become a channel for data leakage. This presents a significant challenge for security stakeholders, since none of the existing data protection tools can guarantee that no sensitive data is exposed to ChatGPT. In this article, we explore this security challenge in detail and show how browser security solutions can address it, allowing organizations to take full advantage of ChatGPT's productivity potential without compromising data security.

ChatGPT’s data protection blind spot: How can text input be controlled in the browser?

Whenever an employee pastes or types text into ChatGPT, that text is no longer governed by the company's data protection tools and policies. It doesn't matter whether the text was copied from a local file, an online document, or another source. This, in fact, is the problem: Data Leak Prevention (DLP) solutions, from on-premises agents to CASBs, are all file-oriented. They apply policies to files based on their content, preventing actions such as modifying, downloading, or sharing them. This capability is of little use for ChatGPT data protection, because no files are involved. Rather, usage consists of pasting copied text or typing directly into a web page, which is beyond the governance and control of any existing DLP product.

How browser security solutions prevent unsafe data use in ChatGPT

LayerX launched its browser security platform for continuous monitoring, risk analysis, and real-time protection of browser sessions. Delivered as a browser extension, LayerX has granular visibility into every event taking place in the session, allowing it to detect risky behavior and enforce policies that prevent predefined actions from occurring.

In the context of protecting sensitive data from being uploaded to ChatGPT, LayerX leverages this visibility to identify text-insertion events, such as "paste" and "type", within the ChatGPT tab. If the text content of a "paste" event violates corporate data protection policies, LayerX blocks the action altogether.
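LayerX does not publish its internals, but the browser mechanism underneath is straightforward to illustrate. The following minimal sketch, written as a hypothetical extension content script in TypeScript, shows how a "paste" event can be inspected and cancelled before the text ever reaches the page; the single hard-coded pattern is a stand-in for a real policy engine.

```typescript
// Minimal sketch (not LayerX's actual code): a content script that
// inspects "paste" events and cancels them before the text reaches
// the page's input field. The placeholder regex stands in for a real
// policy engine.
const PLACEHOLDER_POLICY = /CONFIDENTIAL/i;

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    // Read the text being pasted from the clipboard payload.
    const pasted = event.clipboardData?.getData("text/plain") ?? "";

    if (PLACEHOLDER_POLICY.test(pasted)) {
      // Cancel the paste before it lands in the prompt box.
      event.preventDefault();
      event.stopPropagation();
    }
  },
  true // capture phase: runs before the page's own handlers
);
```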

To enable this capability, security teams using LayerX should define the phrases or regular expressions they want to protect from exposure, and then create a LayerX policy that fires whenever there is a match against those strings.
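For illustration only, here is what such a rule set might look like. The pattern names and formats below are invented; a real deployment would encode the organization's own sensitive strings.

```typescript
// Illustrative rule set: names and formats are invented for this sketch.
const SENSITIVE_PATTERNS: { name: string; pattern: RegExp }[] = [
  // Well-known public format for AWS access key IDs.
  { name: "AWS access key ID", pattern: /\bAKIA[0-9A-Z]{16}\b/ },
  // Hypothetical internal codename and record-ID formats.
  { name: "Internal project codename", pattern: /\bproject-orion\b/i },
  { name: "Customer record ID", pattern: /\bCUST-\d{8}\b/ },
];

// Returns the name of the first rule the text violates, or null if clean.
function violatesPolicy(text: string): string | null {
  for (const rule of SENSITIVE_PATTERNS) {
    if (rule.pattern.test(text)) {
      return rule.name;
    }
  }
  return null;
}
```

Plugging `violatesPolicy` into the paste handler sketched above completes the picture: the event is cancelled whenever the function returns a rule name.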

See how it looks in action:

[Screenshot: Policy settings in the LayerX control panel]

[Screenshot: A user trying to copy sensitive information into ChatGPT is blocked by LayerX]

Additionally, organizations that wish to prevent their employees from using ChatGPT altogether can use LayerX to block access to the ChatGPT website, or to any other online AI-based text generator, including ChatGPT-like browser extensions.
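As a rough sketch of what outright blocking can look like at the browser level, the snippet below uses Chrome's Manifest V3 declarativeNetRequest API, a generic extension mechanism rather than anything LayerX-specific; it requires the "declarativeNetRequest" permission in the extension manifest.

```typescript
// Generic Chrome extension technique for blocking a site outright,
// not LayerX code. Requires the "declarativeNetRequest" permission.
chrome.declarativeNetRequest
  .updateDynamicRules({
    removeRuleIds: [1], // replace any earlier version of this rule
    addRules: [
      {
        id: 1,
        priority: 1,
        action: { type: chrome.declarativeNetRequest.RuleActionType.BLOCK },
        condition: {
          // "||" matches the domain and all of its subdomains.
          urlFilter: "||chat.openai.com",
          resourceTypes: [chrome.declarativeNetRequest.ResourceType.MAIN_FRAME],
        },
      },
    ],
  })
  .catch(console.error);
```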

Learn more about LayerX ChatGPT data protection here.

Using LayerX’s browser security platform for complete SaaS protection

What makes LayerX the only solution that can effectively address the ChatGPT data protection gap is its location in the browser itself, with real-time visibility and policy enforcement in the actual browser session. This approach also makes it an ideal solution for protecting against any cyber threat that targets data or user activity in the browser, as is the case with SaaS applications.

Users interact with SaaS applications through their browsers, which makes it easy for LayerX to protect both the data in those applications and the applications themselves. It does so by applying the following types of policies to user activities throughout web sessions:

Data protection policies: In addition to standard file-oriented protection (preventing copying, sharing, downloading, and so on), LayerX offers the same granular protection it provides for ChatGPT. In fact, once the organization has defined which strings it prohibits from being pasted, the same policies can be extended to prevent that data from being exposed to any website or SaaS app, as sketched below.
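A minimal sketch of that extension, reusing the invented rule names from the earlier example and scoping them to URL patterns:

```typescript
// Sketch: scoping paste policies to sites. Site patterns and rule names
// are invented; "Internal project codename" is enforced everywhere.
const POLICY_SCOPES: { site: RegExp; rules: string[] }[] = [
  {
    site: /chat\.openai\.com$/,
    rules: ["AWS access key ID", "Customer record ID"],
  },
  { site: /.*/, rules: ["Internal project codename"] },
];

// Collect every rule that applies to the current page's hostname.
function rulesForSite(hostname: string): string[] {
  return POLICY_SCOPES.filter((scope) => scope.site.test(hostname)).flatMap(
    (scope) => scope.rules
  );
}
```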

Account compromise mitigation: LayerX monitors each user's activities in the organization's SaaS applications and detects any anomalous behavior or data interaction that indicates the user's account has been compromised. LayerX policies can then terminate the session or disable the user's ability to interact with data in the application.
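The detection logic itself is not public, but the general idea can be sketched as a baseline comparison; every name and threshold below is invented for illustration.

```typescript
// Toy sketch of account-compromise detection: flag a user whose activity
// rate deviates sharply from their learned baseline. All names and the
// 3-sigma threshold are invented for illustration.
interface UserBaseline {
  meanActionsPerMinute: number;
  stdDev: number;
}

function isAnomalous(
  actionsPerMinute: number,
  baseline: UserBaseline
): boolean {
  return actionsPerMinute > baseline.meanActionsPerMinute + 3 * baseline.stdDev;
}

// Hypothetical hook: end the session when anomalous activity is seen.
function onActivitySample(
  actionsPerMinute: number,
  baseline: UserBaseline,
  terminateSession: () => void
): void {
  if (isAnomalous(actionsPerMinute, baseline)) {
    terminateSession(); // e.g., force logout and alert the security team
  }
}
```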

Data protection concerns of all kinds have been brought to the fore in recent years, notably since the implementation of the GDPR. Among them are various blind spots that raise security concerns, one of which surrounds the rise of ChatGPT. Here, we look at where those vulnerabilities lie and how security teams can work to address them.

ChatGPT (short for Chat Generative Pre-trained Transformer) is an AI-based conversational tool that lets users hold natural-language conversations on the web. As such, it can be easily integrated with software solutions such as messaging platforms, website chatbots, and virtual assistants.

Unfortunately, such powerful technology brings privacy and data concerns, some of the most serious of which surround the way information is managed and stored. First, many companies are unaware of the data generated by users conversing with ChatGPT, which includes the users' messages, their browsing history, and potentially other user-generated content. As a result, there can be a lack of control over how users' data is used, shared, and stored.

In addition, the AI component of ChatGPT can record personal information about the user, such as their preferences, thoughts, and conversations, which raises further data privacy concerns. This type of data is particularly valuable and may be exploited for nefarious purposes.

Ikaroa, a full-stack tech company, has recognised these data protection blind spots and developed a set of measures to help organisations address them. These include Data Risk Impact Analysis, Data Protection Compliance Monitoring, and Data Protection by Design and Default principles.

The Data Risk Impact Analysis assesses the data that a company creates, collects, stores, and processes, and identifies any areas of risk associated with that data. This helps security teams understand the blind spots in their current practices and take steps to address any issues.

Data Protection Compliance Monitoring then ensures that the level of data protection established by the Data Risk Impact Analysis is maintained at all times. Additionally, Data Protection by Design and Default principles ensure that the company builds its data protection measures into the design process, understanding how and when data is used, stored, and shared.

In summary, the combination of these security measures helps organisations identify any data protection blind spots associated with their use of ChatGPT and ensures that data is handled in a safe and responsible way. Ikaroa provides a number of solutions to help companies protect their data when using ChatGPT. By implementing these solutions, security teams can identify, tackle, and minimise the associated risks and vulnerabilities.

