Shadow AI: how to find (and actually govern) unapproved AI use in your org
Two years ago, "shadow IT" meant an employee signed up for a Dropbox account without telling you. Annoying. Slow-moving. Solvable in a quarterly audit.
Today, "shadow AI" means an employee pasted your customer database schema into a random LLM they found on Product Hunt five minutes ago. The data exfiltration happened before you finished this sentence.
Shadow AI is shadow IT on fast-forward, with a bigger blast radius. It's governable, but only if you know what you're actually looking for.
What shadow AI actually is
Shadow AI is the use of AI tools inside an organization without IT or security approval. It covers:
- Public LLM chatbots used on personal accounts (ChatGPT personal, Claude personal, Perplexity, Gemini personal).
- Enterprise-looking AI tools signed up for with personal credit cards.
- Browser extensions that add AI features to Gmail, Google Docs, Slack, or the browser itself.
- API keys generated by developers and used in side projects that touch production data.
- Free AI tools inside larger SaaS products the org already uses (Notion AI, Figma AI, Linear, GitHub Copilot on personal seats).
The pattern is always the same: an individual finds a tool that makes their job easier, starts using it, doesn't mention it, and may not even think of it as "using AI."
Why shadow AI is worse than shadow IT
Three reasons.
1. The data exposure is instant. Shadow IT usually meant a file sync that you could later find and kill. Shadow AI means the prompt, with its contents, is already on a third-party server the moment the user hits send. You can't recall a prompt.
2. The content is unstructured. Traditional data loss prevention rules look for patterns: SSNs, credit card numbers, specific document fingerprints. AI prompts blow through those because they're usually free-text English with the sensitive bit paraphrased or mixed in. "Hey, summarize this conversation with our CFO about Q3 layoffs" doesn't match any regex in your DLP (the sketch after this list makes the miss concrete).
3. The compliance exposure compounds. Personal LLM sessions often don't have audit logs, don't have data residency controls, and may be used as training data by default. For regulated industries, that's the shape of a reportable incident.
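To make reason 2 concrete, here's a minimal Python sketch. The SSN regex stands in for a typical structured DLP rule; the exact patterns your DLP ships with will differ.

```python
import re

# A typical structured DLP rule: US SSNs in the canonical ddd-dd-dddd form.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

print(bool(SSN.search("Employee record: 123-45-6789")))  # True: DLP fires
print(bool(SSN.search(
    "Hey, summarize this conversation with our CFO about Q3 layoffs"
)))  # False: just as sensitive, zero pattern match
```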
How to discover shadow AI
Start with what you already have: your Secure Web Gateway logs.
If your SWG inspects SSL on-device (as dope.SWG does), every AI interaction produces a log line with the destination, the time, the user, and the OAuth provider. From those logs, three questions become answerable:
Which AI tools does my org actually use? Sort by destination domain, filter to AI categories, and count distinct tools. Most mid-market orgs find 20 to 40 distinct AI endpoints in a month. Far more than the IT-approved list.
Who's using them? Group by user, department, and team. Patterns emerge: engineering uses Claude and Copilot, marketing uses ChatGPT and Jasper, sales uses Clay and Apollo AI tools, design uses Midjourney and Figma AI.
Corporate or personal accounts? Inspect the OAuth flow in the logs. If the sign-in destination is your company's tenant, it's governed. If it's a personal Google or OpenAI account, it's outside your control.
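A minimal sketch of that triage in Python, assuming your SWG exports CSV with hypothetical user, department, domain, and oauth_tenant columns (real field names vary by product):

```python
import csv
from collections import Counter, defaultdict

CORPORATE_TENANT = "acme.com"  # hypothetical corporate tenant

tools = Counter()                 # question 1: distinct AI destinations
users_by_tool = defaultdict(set)  # question 2: who uses what
personal_sessions = []            # question 3: sign-ins outside the tenant

with open("swg_ai_traffic.csv") as f:  # pre-filtered to AI-categorized domains
    for row in csv.DictReader(f):
        tools[row["domain"]] += 1
        users_by_tool[row["domain"]].add((row["user"], row["department"]))
        if row["oauth_tenant"] != CORPORATE_TENANT:
            personal_sessions.append(row)

print(f"{len(tools)} distinct AI endpoints this month")
for domain, hits in tools.most_common(10):
    print(f"  {domain}: {hits} requests, {len(users_by_tool[domain])} users")
print(f"{len(personal_sessions)} personal-account sessions to review")
```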
Three complementary sources:
- CASB logs surface AI activity inside SaaS apps you already monitor (Notion, M365, Google Workspace).
- Identity provider logs show approved SSO sessions. Everything present in SWG logs but missing from this list is unapproved.
- Endpoint DLP events catch the actual prompt content for anything a user tries to send.
Between those three, you have the full picture.
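The cross-reference itself is just a set difference. A sketch, assuming the tool domains from each source have already been extracted into plain sets:

```python
# Hypothetical inputs: AI domains seen in SWG traffic vs. apps behind
# approved corporate SSO, per the IdP logs.
swg_observed = {"chat.openai.com", "claude.ai", "perplexity.ai", "notion.so"}
idp_approved = {"claude.ai", "notion.so"}

unapproved = swg_observed - idp_approved
print(sorted(unapproved))  # ['chat.openai.com', 'perplexity.ai']
```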
The top shadow AI tools in enterprises today
Based on patterns we see across customer deployments, here are the tools that show up most often in first scans:
- ChatGPT (personal accounts). Still the most common. Employees sign up with personal Google accounts, often on the corporate laptop.
- Claude (personal accounts). Growing fast, especially in engineering orgs.
- Gemini in Chrome. Enabled by default if signed into a personal Google account. Often unnoticed.
- Perplexity. Used as a research tool. Prompts often contain internal context.
- Copilot Pro (personal). Not to be confused with M365 Copilot. Developers sign up individually.
- Browser AI extensions. Readwise Ghostreader, Compose AI, ChatGPT Sidebar, Monica. Often have broad permissions.
- IDE plugins. Cursor, Tabnine, Cody, Replit AI. Some send code to third-party services by default.
- Email AI assistants. Superhuman AI, Shortwave. Read inbox content.
- Meeting AI. Fireflies, Otter, Read.ai. Often joined via personal accounts.
- Specialized tools. Midjourney, Runway, ElevenLabs. Usually lower risk but still unsanctioned.
Your list will look different. Run the discovery.
A 30-day shadow AI remediation plan
Thirty days is a realistic pace: not aggressive, but fast enough to actually get it done.
Week 1: Discover. Pull SWG, CASB, and IdP logs. Build a single inventory of AI tools, user volume, and account type (corporate vs personal). Don't block anything yet.
Week 2: Classify. For each tool, answer three questions: What data is being sent? Is it approved? Is there a corporate-tenant alternative? Score each tool as Green (approved), Yellow (needs controls), Red (block candidate).
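As a sketch, the Week 2 scoring reduces to a small decision function. The data tiers and thresholds below are assumptions to illustrate the shape, not a prescribed rubric:

```python
def classify_tool(data_tier: str, approved: bool, corp_alternative: bool) -> str:
    """Map the three Week 2 answers to Green / Yellow / Red.

    data_tier: 'public', 'internal', or 'regulated' (hypothetical tiers).
    """
    if approved:
        return "Green"
    if data_tier == "regulated":
        return "Red"     # block candidate: regulated data, no approval
    if corp_alternative:
        return "Yellow"  # needs controls: steer users to the corporate tenant
    return "Yellow" if data_tier == "public" else "Red"

print(classify_tool("regulated", approved=False, corp_alternative=True))  # Red
```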
Week 3: Decide and communicate. Put together the approved list, the restricted list, and the block list. Write a two-page AI tool policy. Communicate before enforcing. Surprise policies generate helpdesk tickets and workarounds.
Week 4: Enforce and monitor. Push the policies via SWG and Cloud Application Control (CAC). Move approved tools to corporate-tenant-only access. Enable endpoint DLP in Monitor mode for the sensitive categories. Watch the first week of logs, then tune.
After the first month, this becomes a monthly rhythm. New AI tools appear constantly. Discovery has to be continuous.
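Continuous discovery can be as simple as diffing each month's domain set against a saved inventory and flagging the new entries. A sketch, reusing the Week 1 SWG extraction:

```python
import json

def new_tools(inventory_path: str, current: set[str]) -> set[str]:
    """Return AI domains seen this month that aren't in the saved inventory."""
    try:
        with open(inventory_path) as f:
            known = set(json.load(f))
    except FileNotFoundError:
        known = set()  # first run: everything counts as new
    with open(inventory_path, "w") as f:
        json.dump(sorted(known | current), f)  # roll this month into the record
    return current - known

# `current` would come from the same SWG log extraction used in Week 1.
print(new_tools("ai_inventory.json", {"claude.ai", "newtool.example"}))
```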
Governance decisions worth getting right
A handful of edge cases that trip most orgs up.
Personal accounts at work. The cleanest policy is: corporate tenant only, enforced at the CAC layer. Users can still use AI, but only inside the governed container.
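For context on the mechanism: tenant-only enforcement usually works by the proxy injecting a vendor-documented header into the sign-in flow (Google honors X-GooGle-Allowed-Domains; Microsoft honors Restrict-Access-To-Tenants). A minimal mitmproxy sketch of the shape, not dope.security's implementation; the domain and directory ID are placeholders:

```python
# Run with: mitmproxy -s tenant_restrict.py
# Restricts Google and Microsoft sign-ins to the corporate tenant.
from mitmproxy import http

CORP_DOMAIN = "acme.com"            # placeholder corporate domain
DIRECTORY_ID = "your-directory-id"  # placeholder Entra tenant GUID

def request(flow: http.HTTPFlow) -> None:
    host = flow.request.pretty_host
    if host.endswith("google.com"):
        flow.request.headers["X-GooGle-Allowed-Domains"] = CORP_DOMAIN
    elif host in ("login.microsoftonline.com", "login.microsoft.com"):
        flow.request.headers["Restrict-Access-To-Tenants"] = CORP_DOMAIN
        flow.request.headers["Restrict-Access-Context"] = DIRECTORY_ID
```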
API keys in dev tools. Developers generate keys, bake them into scripts, and sometimes commit them to Git. A separate workflow (secrets scanning, per-dev API budget controls) handles this better than a blanket AI block.
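A minimal secrets-scan sketch for the common AI provider key formats. The prefixes are the publicly documented ones, but treat the exact lengths as assumptions, and prefer a maintained scanner (gitleaks, trufflehog) in practice:

```python
import pathlib
import re
import sys

# Documented public prefixes; lengths are approximate and may change.
AI_KEY_PATTERNS = {
    "anthropic":   re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
    "openai":      re.compile(r"sk-(?!ant-)[A-Za-z0-9_-]{20,}"),  # skip sk-ant-
    "huggingface": re.compile(r"hf_[A-Za-z0-9]{30,}"),
}

def scan(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for provider, pattern in AI_KEY_PATTERNS.items():
            for match in pattern.finditer(text):
                print(f"{path}: possible {provider} key ({match.group()[:12]}...)")

scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```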
Browser extensions. Review permissions. Maintain a managed list for enterprise Chrome/Edge. Block the rest at the extension policy level, not just the URL level.
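"Extension policy level" means Chrome/Edge's ExtensionSettings policy: default-block with an explicit allowlist. A sketch that generates the policy JSON; the extension IDs are placeholders, not real ones:

```python
import json

# Placeholder 32-character extension IDs; substitute your vetted list.
ALLOWED_EXTENSIONS = [
    "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
    "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
]

policy = {"*": {"installation_mode": "blocked"}}  # default-deny all extensions
for ext_id in ALLOWED_EXTENSIONS:
    policy[ext_id] = {"installation_mode": "allowed"}

# Deploy the result as the ExtensionSettings value via MDM or GPO.
print(json.dumps({"ExtensionSettings": policy}, indent=2))
```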
Agentic AI. Tools that execute actions, not just answer questions, are the next category. Same discovery pattern, much higher enforcement requirements. Plan for them now.
The dope.security enforcement layer, briefly
The AI governance stack (discovery plus three enforcement layers) maps directly to dope.security products:
- Discovery runs on dope.SWG logs, which surface corporate-vs-personal account usage for the major SaaS and AI platforms in dope.console.
- SWG policy allows, warns, or blocks AI destinations.
- Cloud Application Control restricts ChatGPT, Claude, Google, and Microsoft to corporate tenants only.
- Dopamine DLP inspects the prompt content on-device before it leaves, with three modes (Block, Monitor, Off).
All four in one console. Same agent. Same policy plane.
Start with discovery
The first thing to do isn't to buy a tool. It's to find out how much shadow AI you actually have. Most orgs are surprised by the result. The surprise is what gets the project funded.
Run a shadow AI discovery on your existing SWG logs, or let dope.security run one in a trial tenant for you. Either way, the report is the starting point.