Here’s why that’s a bad idea
Generative AI (GenAI) burst onto the scene in late 2022 and promised to help people get things done better and faster. Users could suddenly get simple explanations of complex issues in layman’s terms. They could write the perfect cover letter or find typos in correspondence. They could find a recipe for an entrée based on what was already in the kitchen. Of course they fell in love with that capability — who wouldn’t?
Then those same users went to work and, unsurprisingly, they brought the GenAI tools they liked most with them. They were suddenly able to summarize complex data easily, write copy designed to appeal to customer interests in minutes, and check or even write code.
Problems quickly arose because GenAI is inherently reactive — you ask a question, and GenAI answers you based on the huge store of knowledge in its Large Language Model (LLM). The more specific the question, the more precise the answer.
The problem is that, unlike entering your pantry contents to get a recipe, submitting copy meant to appeal to an existing customer entails entering details about that customer. To summarize a complex report, you must enter that report. To check code…well, you see where this is going. Complicating things still further in the enterprise, many employees using personal GenAI instances are on the tool’s free tier, which typically retains prompts and responses and can use them to train the underlying LLM. The opportunity for data loss and leaks increases with every such user interaction.
About half (48-49%) of enterprise employees report that they have uploaded sensitive company information, such as financial, sales, or customer information, or copyrighted material, into public AI tools.1
I have a DLP solution, so I should be fine
A dedicated data loss prevention (DLP) solution is essential for today’s enterprise. These solutions, used and refined over decades, detect structured data, enforce predefined policies, and play a vital role in regulatory compliance.
But in the world of GenAI, it is what these solutions DON’T do that is important.
Traditional DLP was not designed to handle the unstructured, dynamic, contextual data flows that are the cornerstone of GenAI use. One example is long-form text prompts, like those you would expect when a user asks the tool to summarize content. That’s because DLP solutions were primarily built to prevent outbound exfiltration, not the “back-and-forth” seen in GenAI.
More importantly, while traditional solutions monitor email and file transfers, they are blind to the “conversational” clipboard copy-and-paste actions common when users interact with GenAI, which, in the workplace, usually happens in a web browser. Legacy DLP solutions cannot see browser forms or browser-based file or archive uploads at all. So, in a typical user-to-GenAI interaction, when the user copies content, pastes it into GenAI, and gets a response, DLP will not see it and cannot stop it. And, unfortunately, if the user is working in the free tier of the GenAI tool, the entire transaction is shared with the tool’s LLM and can be used to train the model, which may, in turn, expose what was learned during that interaction.
Well, my CASB DLP would catch GenAI traffic
Cloud access security brokers (CASBs) were designed to help organizations maintain visibility, enforce security policies, protect data, and defend against threats in cloud environments, including SaaS, PaaS, and IaaS. While it might seem like that would make CASBs the perfect place to provide DLP for data bound to GenAI tools, this is yet another case where traditional methods fall short.
You can think of it like trying to use a screwdriver to hammer in a nail. CASBs are good tools, but they are not the right tools for this job.
CASBs often rely on predefined app catalogs, while users access GenAI via browser sessions, extensions, APIs, or personal accounts. Like traditional DLP tools, CASBs rely on regular expressions (regex) or keyword patterns and can’t parse the unstructured, contextual content that is the backbone of GenAI exchanges. While they may inspect files uploaded to a sanctioned app, they might not see a file uploaded into a GenAI tool within a browser session. And most CASBs will miss GenAI responses completely.
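To make that pattern-matching gap concrete, here is a minimal TypeScript sketch. The rules and sample prompts are illustrative assumptions, not any vendor’s actual detection logic. A regex built for structured identifiers fires on a card number but stays silent when the same kind of sensitive information arrives as long-form prose:

```typescript
// Illustrative only: two regex rules of the kind pattern-based
// DLP and CASB engines apply.
const rules: { name: string; pattern: RegExp }[] = [
  { name: "credit-card", pattern: /\b(?:\d[ -]?){13,16}\b/ },
  { name: "ssn", pattern: /\b\d{3}-\d{2}-\d{4}\b/ },
];

function scan(prompt: string): string[] {
  return rules.filter((r) => r.pattern.test(prompt)).map((r) => r.name);
}

// Structured data: the pattern fires.
console.log(scan("Card on file: 4111 1111 1111 1111"));
// -> ["credit-card"]

// Sensitive contract terms restated as prose: nothing fires.
console.log(scan(
  "Summarize this: Acme renews in Q3 at a 40% discount off list price, " +
  "contingent on removing the audit clause from section 7."
));
// -> []
```

The second prompt is exactly the kind of content a user pastes in for summarization, and nothing in it matches a structured pattern, so a pattern-based engine lets it through.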
GenAI lives in the browser. That’s the perfect place to secure it.
For a DLP solution to be effective, it must function in the proper environment. For traditional DLP tools, that environment is the endpoint, the email gateway, or the network egress point; for CASBs, it is between the enterprise and sanctioned cloud apps. For GenAI, on the other hand, the best environment to provide DLP controls is the browser. It is vital that controls are consistent, easy to apply, and, to the greatest degree possible, inline. Controls must work seamlessly and should be customizable by user and group.
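To illustrate what “inline, in the browser” can look like, here is a minimal hypothetical sketch in TypeScript. It is not Menlo’s implementation, and looksSensitive() is a stand-in for a real policy check. A browser-side control, such as an extension content script, intercepts the paste event in the capture phase and cancels it before the prompt field ever receives the content:

```typescript
// Hypothetical sketch: a browser-side control that inspects pasted
// text before it reaches a GenAI prompt field.
function looksSensitive(text: string): boolean {
  // Stand-in heuristic for illustration: very long pastes or an
  // obvious marker word. A real control would apply DLP policy here.
  return text.length > 2000 || /confidential/i.test(text);
}

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const pasted = event.clipboardData?.getData("text") ?? "";
    if (looksSensitive(pasted)) {
      // Cancel the paste before the page ever sees the content.
      event.preventDefault();
      console.warn("Paste blocked by browser DLP policy.");
    }
  },
  true // capture phase, so this runs before the page's own handlers
);
```

Because the check runs where the interaction happens, it sees the conversational copy-and-paste flow that network-level tools never observe.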
Menlo Security makes it easy to provide exactly the controls that you need, in real time and with a full understanding of browser context. Copy/paste and upload/download controls, along with character limits, are easy to apply. You can apply a host of predefined DLP rules to your content, or you can use Menlo templates to build your own.
Read the full report
Download the latest Menlo report, How AI is Shaping the Modern Workspace, and find out more about how to use GenAI safely inside the enterprise, including issues around DLP, shadow AI, and regulatory compliance. You’ll also get a comprehensive view of the risks posed by the malicious use of AI by attackers, including the rise of AI-generated phishing attacks, new malware vectors, imposter AI sites, and more.
————
1 Trust, attitudes and use of artificial intelligence: a global study 2025, KPMG and the University of Melbourne. https://kpmg.com/xx/en/our-insights/ai-and-technology/trust-attitudes-and-use-of-ai.html