Your Employees Are Uploading Sensitive Files to AI. Here's How to Stop It.
77% of employees have leaked sensitive data through AI tools like ChatGPT.
That number comes from a 2025 report by eSecurity Planet. And it's not a story about bad actors. It's a story about normal employees trying to move faster: summarizing a patient record, asking for help with a contract, uploading a financial model, all without thinking about what they just handed to a third-party AI.
Your security policy didn't stop them. Your training didn't stop them. And if you're relying on a network-level control, your DLP didn't stop them either.
The Problem Is Bigger Than You Think
Here's what the data actually looks like in 2025:
- 22% of files employees upload to generative AI tools contain sensitive information: PII, PHI, or PCI data
- 47% of employees using AI tools at work are doing it through personal, unmanaged accounts
- 83% of organizations have no technical controls in place to detect or block sensitive uploads to AI platforms
- Breaches involving shadow AI cost an average of $4.63 million ($670,000 more than standard breaches), per IBM's 2025 Cost of a Data Breach Report
The threat isn't hypothetical. Samsung engineers pasted proprietary source code into ChatGPT. The interim chief of CISA uploaded sensitive government documents to a public ChatGPT account. These weren't careless people; they were people using the fastest tool available.
This is the shadow AI problem. And it's playing out across your workforce right now.
Why ChatGPT, Claude, and Gemini Are Your Biggest DLP Blind Spots
Every AI tool your employees use has a file upload feature. That's the problem.
ChatGPT
ChatGPT supports file uploads in both free and paid tiers. Employees can upload PDFs, spreadsheets, Word documents, and more. Free-tier users are submitting that data to a model that OpenAI may use for training. Even on ChatGPT Enterprise, which doesn't train on your data, every uploaded file is processed on OpenAI's servers. If a file contains PII, PHI, or PCI data, it's left your environment.
ChatGPT is also the most widely used AI tool in the enterprise, which makes it the highest-volume leak vector. Personal ChatGPT accounts (the ones employees create themselves, outside any company agreement) are especially dangerous because you have zero visibility, zero data agreement, and zero recourse.
Claude (Web App and Desktop)
Claude has become a go-to tool for long-document analysis, summarization, and coding assistance. Both the web app at claude.ai and the Claude desktop application accept file uploads: PDFs, images, text files, code files, spreadsheets.
The desktop app adds a new dimension to the risk. It's not a browser tab you can block with a URL filter. It's a native application running on the device, capable of accessing local files directly. Traditional network-level DLP tools never see what a desktop app sends. If an employee drags a client contract into Claude's desktop interface, most security stacks have no visibility into that action.
Enterprise Claude plans include data retention controls, but only for accounts your company manages. Employees on personal Claude Pro or free accounts, using their personal email, are entirely outside your governance boundary.
Gemini
Google's Gemini, both the consumer web app and Gemini Advanced, supports document uploads and is deeply integrated into Google Workspace. For employees who already live in Google Docs, Sheets, and Drive, the path from a sensitive document to a Gemini prompt is one click.
Gemini Advanced users can upload files directly from Drive. That means sensitive data already inside your Google environment can be piped into an AI model without leaving the Google interface, making it invisible to perimeter-based security controls that don't understand what's happening at the application layer.
How Dopamine DLP Handles This
Dopamine DLP is dope.security's AI-powered endpoint DLP. It runs as an agent on every managed device: the same lightweight agent that powers the dope.security SWG. No extra deployment. No new endpoint tool to manage.
Because Dopamine DLP works at the device level, it has visibility that network-based tools don't.
What Dopamine DLP Monitors
File uploads to AI tools. When an employee attempts to upload a file to ChatGPT, Claude (web or desktop), or Gemini, Dopamine DLP intercepts the action before the file leaves the device. It inspects the content, not just the filename, and evaluates it against your data policies.
Clipboard activity. When sensitive data is copied and pasted into an AI prompt, Dopamine DLP catches it at the clipboard level. Copy-pasting a block of patient records into Claude's web app triggers the same policy evaluation as a file upload.
AI desktop applications. The Claude and ChatGPT desktop apps are native applications, not browser tabs. Dopamine DLP's endpoint agent has visibility into the data they send because it operates at the OS level, not the network level.
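Conceptually, each of these interception points converges on the same decision: inspect the content before it leaves the device, then allow or block. The sketch below illustrates that decision point only; the destination list, helper names, and the trivial `contains_sensitive` check are hypothetical stand-ins, not Dopamine DLP's actual implementation (which hooks OS-level file and clipboard events rather than exposing a function like this).

```python
from pathlib import Path

# Hypothetical set of AI destinations an endpoint agent might watch,
# covering both web origins and native desktop apps.
AI_DESTINATIONS = {
    "chatgpt.com", "claude.ai", "gemini.google.com",
    "Claude.app", "ChatGPT.app",
}

def contains_sensitive(text: str) -> bool:
    # Placeholder for real content inspection (patterns, classifiers).
    return "ssn" in text.lower() or "api_key" in text.lower()

def on_upload_attempt(file_path: str, destination: str) -> bool:
    """Return True to allow the upload, False to block it.

    The key idea: inspect the file's content, not just its name,
    before the data leaves the device.
    """
    if destination not in AI_DESTINATIONS:
        return True
    content = Path(file_path).read_text(errors="ignore")
    return not contains_sensitive(content)
```

The same gate applies whether the trigger is a browser upload, a drag into a desktop app, or a clipboard paste; only the event source differs.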
What Dopamine DLP Detects
Dopamine DLP identifies the data types that matter most for compliance:
PII (Personally Identifiable Information): names, email addresses, Social Security numbers, driver's license numbers, passport numbers, and combinations of identifiers that constitute PII under GDPR, CCPA, and other privacy regulations.
PHI (Protected Health Information): patient names, dates of service, diagnoses, treatment records, insurance information, and any data covered under HIPAA. For healthcare organizations, this is the category that triggers the largest regulatory exposure.
PCI Data (Payment Card Information): credit card numbers, CVV codes, bank account numbers, and payment records subject to PCI-DSS. Even a partial card number in a document is a detection event.
Source code and intellectual property: proprietary code, internal documentation, and business strategy documents. Samsung's ChatGPT incident involved source code, not regulated data, but the business impact was the same.
Credentials and secrets: API keys, passwords, tokens, and authentication strings. Developers frequently paste these into AI coding assistants for help debugging. Dopamine DLP catches them before they reach the model.
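Detection of these categories is typically a mix of pattern matching and validation to keep false positives down. As a hedged illustration (not Dopamine DLP's actual engine), here is a minimal sketch that flags an SSN-shaped string as PII and validates candidate card numbers with the Luhn checksum before flagging PCI data:

```python
import re

# Simplified, illustrative patterns; production DLP engines add
# context, confidence scoring, and many more identifier types.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Validate a candidate card number with the Luhn checksum."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def classify(text: str) -> set[str]:
    """Return the set of sensitive-data categories found in text."""
    findings = set()
    if SSN_RE.search(text):
        findings.add("PII")
    for match in CARD_RE.finditer(text):
        if luhn_valid(match.group()):   # cut false positives on random digits
            findings.add("PCI")
    return findings
```

The Luhn step is why "even a partial card number" is a judgment call: a digit run that fails the checksum is far more likely to be an order ID than a card.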
What Dopamine DLP Does When It Finds Something
You configure the response. The options are:
Block: The upload or paste is stopped before it reaches the AI tool. The employee sees a notification explaining why.
Warn: The employee is alerted that the content may contain sensitive data, but the action is not blocked. Useful for lower-sensitivity categories where awareness is sufficient.
Log and alert: The action completes, but it's logged and your security team is notified. Useful for establishing a baseline before moving to active blocking.
Most organizations start with logging and alerting to understand their exposure, then graduate to blocking for the highest-risk categories.
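That graduated rollout amounts to a per-category mapping from detection to response. A hypothetical sketch of such a policy (the category names, actions, and schema here are illustrative, not Dopamine DLP's actual configuration format):

```python
from enum import Enum

class Action(Enum):
    BLOCK = "block"   # stop the upload/paste, notify the employee
    WARN = "warn"     # alert the employee, allow the action
    LOG = "log"       # allow, record, and notify the security team

# A plausible starting policy: block the highest-risk regulated
# categories, warn on PII, log everything else to build a baseline.
POLICY = {
    "PHI": Action.BLOCK,
    "PCI": Action.BLOCK,
    "PII": Action.WARN,
    "SOURCE_CODE": Action.LOG,
}

def respond(category: str) -> Action:
    # Unknown categories default to logging so nothing goes unseen.
    return POLICY.get(category, Action.LOG)
```

Moving from baseline to enforcement is then a one-line policy change per category, which is what makes the log-first rollout cheap to graduate from.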
Setting It Up
Deploying Dopamine DLP on your existing dope.security fleet takes under 10 minutes via Jamf or Intune. For new deployments, the full stack, SWG plus Dopamine DLP, can be live across your entire device fleet the same day.
The default policy set covers ChatGPT, Claude (web and desktop), Gemini, and a growing library of other AI tools including Perplexity, DeepSeek, Copilot, and Grok. You can add custom applications or modify detection rules without a professional services engagement.