Cloud DLP in 2026: protecting data at rest in OneDrive, Google Drive, and every SaaS you forgot about
Ask your average knowledge worker how many files they've shared externally through their work OneDrive or Google Drive in the last two years. They don't know. They don't remember creating most of those links, because creating them took one click in a menu they use fifty times a day.
Now ask which of those links are still active, which point to the personal Gmail addresses of people who have since left the company, and which contain PII, source code, or a contract draft with unredacted financials.
That's the data-at-rest problem, and that's what a modern cloud DLP solves.
Cloud DLP vs endpoint DLP: one table

| | Cloud DLP | Endpoint DLP |
|---|---|---|
| Where it runs | API connection to the SaaS tenant | Agent on the laptop or desktop |
| Data state | Data at rest: files already stored and shared | Data in motion and in use: uploads, copy/paste, USB |
| What it sees | Sharing links and permissions across the whole tenant, including history | Only actions on that device, only while the agent is running |
| Typical fix | Revoke the share, quarantine the file, reassign ownership | Block the action in the moment |
| Blind spot | Activity that never touches the SaaS tenant | Files shared long ago, from any device |

What a modern cloud DLP scan actually finds
Three buckets of findings come up in almost every scan we've run against a new tenant.
Public link exposure. Files shared “with anyone who has the link” that contain personally identifiable information, protected health data, payment card numbers, engineering secrets, or pre-announcement customer lists. Often these links were created for a single meeting three quarters ago and nobody revoked them.
External-party oversharing. Shares to personal Gmail or Yahoo accounts that belong to former employees, former contractors, or the vendor you stopped working with in 2024. The user who created the share is usually still at the company and has no memory of creating it.
Sensitive content in obvious places. Executive comp reviews in a folder that's been inherited by four different admins. A customer PII export sitting in someone's “Old Downloads” folder. The signed MSA with every figure visible, shared with a wider group than intended.
None of this is malicious. All of it is leaky.
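The three buckets above are mostly a question of who a share grants access to. A minimal triage sketch, assuming permission dicts loosely shaped like Microsoft Graph's driveItem permissions (real payloads carry more fields) and a hypothetical internal domain:

```python
INTERNAL_DOMAIN = "example.com"  # hypothetical tenant domain

def classify_share(perm: dict) -> str:
    """Sort one share permission into a finding bucket."""
    if perm.get("link", {}).get("scope") == "anonymous":
        return "public-link"        # anyone with the link
    email = perm.get("grantedTo", {}).get("email", "")
    if email and not email.endswith("@" + INTERNAL_DOMAIN):
        return "external-share"     # personal or third-party account
    return "internal"

shares = [
    {"link": {"scope": "anonymous", "type": "view"}},   # forgotten meeting link
    {"grantedTo": {"email": "ex-contractor@gmail.com"}},
    {"grantedTo": {"email": "alice@example.com"}},
]
print([classify_share(p) for p in shares])
# → ['public-link', 'external-share', 'internal']
```

The third bucket, sensitive content in the wrong place, needs content classification on top of this permission check; that's the subject of the next section.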
Why regex DLP breaks on modern content
The classifier in classic DLP looks for patterns: 16-digit numbers that pass Luhn validation, nine-digit numbers that match SSN formatting, keywords from a dictionary. That approach catches clean, structured content.
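A sketch of that classic approach, simplified to two detectors (real products stack hundreds of these rules, but the mechanics are the same):

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right."""
    total, parity = 0, len(digits) % 2
    for i, d in enumerate(digits):
        n = int(d)
        if i % 2 == parity:
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0

CARD = re.compile(r"\b(?:\d[ -]?){15}\d\b")   # 16 digits, optional separators
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # 123-45-6789 formatting

def classic_dlp(text: str) -> list[str]:
    hits = []
    for m in CARD.finditer(text):
        if luhn_ok(re.sub(r"\D", "", m.group())):
            hits.append("pan")
    hits += ["ssn"] * len(SSN.findall(text))
    return hits

print(classic_dlp("card 4111 1111 1111 1111"))                       # → ['pan']
print(classic_dlp("We agreed to $4.2M over three years"))            # → []
```

The second call is the failure mode: a paraphrased deal figure is exactly as sensitive as the card number, and the pattern matcher returns nothing.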
The actual content in SaaS is messier:
- Screenshots of dashboards with customer data baked into the pixels. OCR is non-trivial.
- Nested documents: a PowerPoint with an embedded Excel that has a PII tab.
- Paraphrased or unstructured: “We agreed to $4.2M over three years for 4,000 seats” in a meeting notes doc.
- Code: real API keys in YAML files, real JWTs in test fixtures, real customer IDs in bug reports.
An LLM-based classifier reads this content the way a human would. It can tell the difference between “here's an example SSN: 123-45-6789” in a training doc and a live export of 10,000 rows that just happens to include SSNs. It can catch a screenshot of a pay stub. It can flag a doc full of customer names even when no single field looks sensitive on its own.
That's what CASB Neural does. LLM-powered classification, applied continuously to OneDrive and Google Drive, tuned to PII, PCI, PHI, and intellectual property.
Continuous scanning vs scheduled scans
Here's a failure mode we see in older tools: the DLP runs on a weekly schedule. On Monday morning it reports that the tenant is clean. On Tuesday afternoon, someone sets up a public share for a pre-sales meeting. On Wednesday, the link gets forwarded. On Thursday, the meeting is over and the share is forgotten. The next Monday scan flags it. The data has been exposed for most of a week.
Continuous monitoring closes that window. New shares get inspected as they happen. Changes to permissions get re-evaluated. The time-to-detect drops from days to minutes.
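The size of that window is simple arithmetic. A sketch, assuming a hypothetical Monday-08:00 weekly scan and a few minutes of event-delivery latency for the continuous case:

```python
from datetime import datetime, timedelta

def next_weekly_scan(event: datetime, scan_day: int = 0, scan_hour: int = 8) -> datetime:
    """First scheduled scan after the event (Monday 08:00 by assumption)."""
    days_ahead = (scan_day - event.weekday()) % 7
    candidate = (event + timedelta(days=days_ahead)).replace(
        hour=scan_hour, minute=0, second=0, microsecond=0)
    if candidate <= event:
        candidate += timedelta(days=7)
    return candidate

share_created = datetime(2026, 3, 10, 14, 0)        # a Tuesday afternoon
weekly_delay = next_weekly_scan(share_created) - share_created
continuous_delay = timedelta(minutes=2)             # assumed event latency

print(weekly_delay)      # exposure window with a weekly scan: 5 days, 18:00:00
print(continuous_delay)  # exposure window with continuous monitoring
```

A Tuesday-afternoon share sits exposed for five and a half days under the weekly schedule; the continuous path is bounded by event-delivery latency, not the calendar.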
For compliance programs where the auditor asks “how fast do you detect external oversharing of PHI?”, this is the difference between a passing answer and a failing one.
One-click remediation: what IT actually does all day
Detection is half the story. The other half is what happens after an alert.
Legacy workflows require IT to find the file in the admin console, figure out who owns it, email the owner, wait for a response, then manually strip the sharing permission. Multiply that by a few hundred findings and the queue becomes a full-time job.
The modern pattern is one-click remediation. From the alert itself, the admin can:
- Revoke external sharing while leaving internal collaboration intact.
- Quarantine a file (move it out of the active folder into a holding location) pending review.
- Reassign ownership from a departing employee to the right new owner.
- Notify the original sharer with a templated message.
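The first bullet is the subtle one: revoking external access without breaking internal collaboration means selecting permissions, not wiping them. A sketch of that selection, again assuming Graph-style permission dicts and a hypothetical internal domain:

```python
INTERNAL_DOMAIN = "example.com"  # hypothetical tenant domain

def permissions_to_revoke(perms: list[dict]) -> list[str]:
    """Pick the permission IDs to delete: anonymous links and external
    grants only. Internal grants are left untouched."""
    revoke = []
    for p in perms:
        if p.get("link", {}).get("scope") == "anonymous":
            revoke.append(p["id"])
        else:
            email = p.get("grantedTo", {}).get("email", "")
            if email and not email.endswith("@" + INTERNAL_DOMAIN):
                revoke.append(p["id"])
    return revoke

perms = [
    {"id": "1", "link": {"scope": "anonymous"}},
    {"id": "2", "grantedTo": {"email": "bob@example.com"}},
    {"id": "3", "grantedTo": {"email": "old-vendor@yahoo.com"}},
]
print(permissions_to_revoke(perms))  # → ['1', '3']

# In Microsoft Graph, each selected ID maps to one call:
#   DELETE /drives/{drive-id}/items/{item-id}/permissions/{permission-id}
# Google Drive has the equivalent permissions.delete method.
```

One selected ID per API call is what makes "one click" honest: the alert already knows exactly which grants to remove.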
When remediation takes seconds instead of minutes, you actually act on what you find.
What to look for when evaluating a cloud DLP
Six questions that separate modern cloud DLP from a tool that looks the part.
1. SaaS coverage. OneDrive, Google Drive at minimum. SharePoint, Box, Dropbox as you go deeper. Ask for the list and the API depth of each connection.
2. Classification engine. Regex and dictionary, or LLM-based? For LLM-based, where does classification happen and what's the privacy posture?
3. Remediation actions. What can you do from the alert? Revoke, quarantine, reown, notify, delete? Can you automate any of these by policy?
4. Analyst review loop. Can findings be triaged, labeled, and used to improve policies? Or is the tool fire-and-forget?
5. Privacy posture. Does the classifier see raw document content, and if so, what's the retention policy? For LLM-based tools, is it zero-retention?
6. Pricing model. Per-user, per-tenant, per-scan? Does cost scale painfully with data volume?
How CASB Neural does this
- CASB Neural is dope.security's cloud DLP for data at rest in SaaS.
- It scans OneDrive and Google Drive for publicly and externally shared files.
- Its classification engine is LLM-powered, tuned to PII, PCI, PHI, and intellectual property.
- It surfaces findings in the dope.console alongside SWG logs, DLP events, and Cloud Application Control policies.
- It supports one-click remediation from the findings view: revoke, reown, quarantine.
- It runs continuously, not on a weekly schedule.
It's in the same console as dope.SWG, Dopamine DLP, and CAC. One login, one policy view, one set of alerts. That's by design.
Start with one tenant
You don't need a six-month procurement to find out if this matters for your org. Pick one OneDrive or one Google Drive tenant. Run a free scan. See what comes back.
In most first scans, the finding that gets someone's attention isn't the 12,000 files flagged. It's the three or four that make the CISO lean forward and say “wait, that's shared with who?”