Endpoint DLP: how data-in-motion protection actually works

Your DLP probably protects data at rest. Great. Now tell me what stops an engineer from pasting a block of production source code into ChatGPT right now.

That's the data-in-motion problem. It's the part most DLP programs still don't solve well, and it's where endpoint DLP earns its budget line.

This post walks through what endpoint DLP actually does in 2026, how "data at rest" and "data in motion" are different problems with different tools, and what a modern endpoint DLP solution needs to look like if you're evaluating one today.

Data at rest vs data in motion: a clean definition

Two concepts, frequently confused:

Data at rest. Files sitting in storage. SharePoint, OneDrive, Google Drive, S3 buckets, the marketing team's Dropbox account. The risk is that a file is exposed to the wrong people, through an overly permissive share link or a misconfigured folder. Scanning and classifying data at rest is the job of a cloud DLP or CASB.

Data in motion. Bytes moving off the device, right now. An upload to a file-sharing site. A prompt typed into a chatbot. A screenshot pasted into a Slack DM with an outside contractor. The risk is that sensitive content leaves the company before anyone sees it. Stopping data in motion is the job of endpoint DLP.

Both matter. They protect different things. A good security program runs both.

If you only run cloud DLP, you catch leaks after the fact, and only in the SaaS tools you've actually hooked up. If you only run endpoint DLP, your OneDrive is full of PII that was shared externally six months ago. Treat them as a pair.

What endpoint DLP has to do in 2026

The category has moved. Here's what the bar looks like now.

Intercept HTTPS traffic at the edge of the device. Almost every interesting egress path is encrypted: browser uploads, SaaS app APIs, AI chatbots. If your endpoint DLP can't inspect HTTPS, it can't see what's leaving. That means on-device SSL inspection has to be part of the package.

Classify content in milliseconds. The decision has to happen before the Send button completes. That rules out sending every file to a cloud sandbox and waiting. Classification either runs locally or hits an API with a tight SLA.

Handle unstructured, free-text content. A 2015-era DLP matched regex patterns against known field types. Credit card numbers, Social Security numbers, maybe a few keywords. That approach breaks on AI prompts, where an employee might paraphrase the sensitive thing, include it in a code snippet, or drop it as a screenshot. Modern classification uses language models to read content the way a human reads content.
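To see why pattern matching breaks down, here is a minimal sketch of a 2015-style regex scanner. The pattern names and the `regex_scan` helper are hypothetical, purely for illustration; the point is that a well-formed identifier matches, while a paraphrase of the same secret does not.

```python
import re

# Classic pattern-matching DLP: catches well-formed identifiers only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def regex_scan(text: str) -> list[str]:
    """Return the names of every pattern that matches the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

# A well-formed SSN is caught...
print(regex_scan("My SSN is 123-45-6789"))                    # ['ssn']
# ...but a paraphrase of the same secret sails through.
print(regex_scan("social is one two three, 45, then 67 89"))  # []
```

The second call returning nothing is the whole problem: the sensitive fact left the device, but no fixed pattern existed to catch it. A semantic classifier reads the meaning instead of the shape.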

Make an explainable decision. A block with no reason creates a helpdesk ticket in ten minutes. "Your upload was blocked because it contained what looks like customer PHI. Here's the specific portion" creates an informed user who stops doing the risky thing.

Run lightweight on the endpoint. If your agent eats 2 GB of memory and slows Chrome to a crawl, users will find a way to disable it. The footprint is a feature.

Why on-device beats cloud-proxy DLP

Two broad architectures exist. Legacy cloud proxy DLP routes every byte of user traffic through a data center somewhere. The proxy inspects, classifies, and forwards. On-device DLP does the work where the traffic originates, on the endpoint itself.

The tradeoffs:

Latency. A cloud proxy adds a round trip to someone else's data center. On-device DLP inspects in place. For users on slow networks or far from the nearest proxy PoP, the difference is measurable.

Privacy and data residency. Cloud proxy DLP means all of your employees' traffic, including the personal banking sessions and the doctor's appointment bookings, passes through third-party infrastructure. On-device DLP keeps the inspection local. Traffic flies direct.

Blast radius. If the cloud proxy provider has an outage, your workforce loses internet. On-device agents fall back to cached policies and keep working.
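The cached-policy fallback can be sketched as a simple try/except around the policy fetch. This is a hypothetical illustration (the class and method names are invented, not dope.security's agent code): on a console outage, the agent keeps enforcing the last known-good policy instead of failing open or cutting off the user's internet.

```python
import time

class PolicyEngine:
    """Sketch: an agent that falls back to its locally cached policy
    when the cloud console is unreachable."""

    def __init__(self, cached_policy: dict):
        self.cached_policy = cached_policy
        self.cached_at = time.time()

    def fetch_remote_policy(self) -> dict:
        # Stand-in for a console API call; here we simulate an outage.
        raise ConnectionError("cloud console unreachable")

    def effective_policy(self) -> dict:
        try:
            policy = self.fetch_remote_policy()
            self.cached_policy, self.cached_at = policy, time.time()
            return policy
        except ConnectionError:
            # Outage: keep enforcing the last known-good policy.
            return self.cached_policy

engine = PolicyEngine({"ai_prompts": "block"})
print(engine.effective_policy())  # {'ai_prompts': 'block'}
```

The design choice worth probing in a demo is the except branch: a fail-open agent returns an empty policy here, and a fail-closed agent raises and blocks all egress.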

Geography. Cloud proxies route through specific regions. For employees in China, the backhaul goes through the Great Firewall, which creates performance problems and sometimes outright failures. On-device DLP doesn't have that dependency.

Dopamine DLP is dope.security's on-device endpoint DLP. It runs on dope.endpoint, which uses under 100 MB of RAM and benchmarks at roughly 4x the performance of legacy proxy SWGs.

The AI prompt edge case

This is the use case that breaks old-school DLP.

An engineer opens Claude or ChatGPT and types: "Here's a production config with embedded secrets. Help me refactor it." The text of the prompt is the leak. There's no file attachment to scan. There's no structured field. The content is unstructured English (plus YAML, plus the API keys).

Regex-and-dictionary DLP is useless here. The secrets don't match a known pattern because the engineer paraphrased or compressed them. The classifier has to understand the content semantically.

Dopamine DLP intercepts the prompt before it leaves the device, classifies it against zero-retention OpenAI APIs, and makes a Block, Monitor, or Off decision per policy. The Dopamine Summary field then explains to the user, in plain English, what the classifier saw and why.

The zero-retention point matters. Employee prompts don't get used to train any model. The classification request goes out, the label comes back, nothing persists.
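The intercept-classify-decide flow described above can be sketched in a few lines. Everything here is a stand-in for illustration: the keyword-based `classify` function mimics what a real semantic classifier call would return (a label plus a human-readable summary), and the policy map and action names are assumptions, not the product's actual schema.

```python
from enum import Enum

class Action(Enum):
    BLOCK = "block"
    MONITOR = "monitor"
    OFF = "off"

def classify(prompt: str) -> tuple[str, str]:
    """Stand-in for a semantic classifier; a real agent would call a
    local model or a zero-retention API and get back a label + summary."""
    if "secret" in prompt.lower() or "api key" in prompt.lower():
        return "credentials", "Prompt appears to contain embedded secrets."
    return "benign", "No sensitive content detected."

# Per-label policy, configured by the admin (example values).
POLICY = {"credentials": Action.BLOCK, "benign": Action.OFF}

def decide(prompt: str) -> tuple[Action, str]:
    label, summary = classify(prompt)
    # Unknown labels default to Monitor rather than silently passing.
    return POLICY.get(label, Action.MONITOR), summary

action, reason = decide("Here's a production config with embedded secrets")
print(action, reason)  # action == Action.BLOCK
```

The second return value is the part that prevents the helpdesk ticket: the same summary the policy engine acted on is shown to the user as the block reason.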

The buyer's checklist

If you're evaluating endpoint DLP solutions in 2026, here's what to ask for in a demo.

• Can the agent inspect HTTPS traffic on-device, including modern QUIC-based protocols and pinned-cert apps?

• How does the classifier handle free-text AI prompts? Ask for a live test.

• What's the memory and CPU footprint at steady state, and under load (a 10 GB file transfer)?

• What's the policy update latency? Minutes? Seconds?

• What happens when the cloud console is unreachable? Does the agent fail open, fail closed, or fall back to cached policy?

• Can the block reason be shown to the user without creating a helpdesk ticket?

• What OS and platforms are covered, and are the agents native (not repackaged Electron wrappers)?

• What's the deployment model? MDM push? OIDC SSO for admins? How long to stand up a pilot?

Want to see it in action?

Book a 20-minute Dopamine DLP demo, or spin up a trial tenant from dope.console and run the AI-prompt test with your own content.
