ChatGPT at work: the three-layer AI governance stack every CISO should own

Every CISO has had the same meeting in the last two years. Someone from legal or HR brings up ChatGPT. The engineering leaders push back. The security team gets asked: "So what's our policy?"

The wrong answer is "we block it." The wrong answer in a different shape is "we'll monitor it and see." The right answer is a three-layer governance stack that most orgs haven't assembled yet.

This post is the playbook.

Why "just block it" doesn't work

When a company blocks ChatGPT at the web proxy, three things happen within a week.

One: Engineers and knowledge workers discover the workarounds. They switch to Claude. They use Gemini in the browser. They ask Copilot in M365. They open the tools on their personal phones.

Two: The workarounds move outside your visibility. Whatever prompt was about to go into the official ChatGPT window now goes into a personal account on a personal device, with zero logs, zero DLP, and zero policy.

Three: The productivity impact shows up in retention surveys a quarter later. Engineers rate "tools I need to do my job" lower. You haven't reduced AI use. You've reduced AI use you can govern.

Blocking is a failure mode dressed up as a policy. The goal isn't no-AI. The goal is AI productivity with the risk governed, and that takes layers.

The three-layer stack

Think of it as a pipeline: three core layers, plus an optional fourth. Each layer handles a specific class of risk and hands the rest to the next layer.

Layer 1: Shadow IT discovery. Know what's happening.
Layer 2: Secure Web Gateway policy. Allow, warn, or block at the URL level.
Layer 3: Cloud Application Control. Restrict to approved tenants only.
Layer 4 (optional but recommended): Endpoint DLP on the prompt itself.

We'll walk through each.

Layer 1: Shadow IT discovery

You can't govern what you can't see. The first move is visibility.

A Secure Web Gateway that can inspect SSL on-device gives you full HTTPS log fidelity for every AI tool anyone uses. From those logs, you can answer:

  • Which AI tools is our workforce actually using? (Usually 10 to 30 different ones, not just ChatGPT.)
  • Who's using them? (By department, by seniority, by team.)
  • Are they using corporate or personal accounts? (The distinction is everything.)

Corporate account + corporate tool = governable. Personal account + corporate tool = a policy problem. Personal account on a personal device = outside your visibility entirely.

In dope.security's SWG logs, you can filter by AI category and see which OAuth flows are going to your corporate tenant versus a personal one. That's the input to Layers 2, 3, and 4.
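
To make that triage concrete, here's a minimal sketch you could run over exported gateway logs. The JSON field names (category, app, user, oauth_domain) and the corporate domain are hypothetical placeholders, not dope.security's schema; map them to whatever your SWG actually exports.

```python
import json
from collections import Counter

# Hypothetical log schema: category, app, user, oauth_domain.
# Map these to whatever your gateway actually exports.
CORPORATE_DOMAINS = {"acme.com"}  # your corporate identity domains

ai_tools: Counter = Counter()
personal_logins: Counter = Counter()

with open("swg_logs.jsonl") as fh:
    for line in fh:
        event = json.loads(line)
        if event.get("category") != "ai":   # keep only AI-categorized traffic
            continue
        ai_tools[event["app"]] += 1         # which tools are actually in use
        # The OAuth login domain separates corporate from personal accounts.
        domain = event.get("oauth_domain", "")
        if domain and domain not in CORPORATE_DOMAINS:
            personal_logins[(event["user"], event["app"])] += 1

print("AI tools observed:", ai_tools.most_common(10))
print("Personal-account sessions:", personal_logins.most_common(10))
```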

Layer 2: Secure Web Gateway policy

Once you know what's being used, you make policy.

The options, in order of increasing restriction:

Allow. The tool is sanctioned. Users get full access. You're still logging.

Warn. Users hit an interstitial that reminds them of AI policy before proceeding. Use this for emerging tools where you haven't decided yet, or for categories where you want a light touch.

Block. Users are prevented from accessing the tool entirely. Use sparingly, because of the workaround problem above. Block makes sense for tools that have failed a risk review or that are known to be high-exposure.

Allow with restrictions. The fourth option, sitting between Allow and Block: the tool is sanctioned, but only inside your tenant. This is where Layer 3 comes in.

A good SWG policy for AI tools looks less like a binary switch and more like a gradient. Dev teams get Claude and Copilot. Marketing gets ChatGPT Enterprise. Finance gets the firm's internal-only AI assistant. And a warn banner on everything else.
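
That gradient is easiest to reason about as data. A hypothetical sketch, with illustrative group and tool names; the real policy lives in your gateway console, not a script:

```python
# Illustrative only: a per-group AI policy expressed as data.
# Actions mirror the SWG options above: allow, warn, block.
AI_POLICY = {
    "engineering": {"claude.ai": "allow", "copilot": "allow", "*": "warn"},
    "marketing":   {"chatgpt-enterprise": "allow", "*": "warn"},
    "finance":     {"internal-assistant": "allow", "*": "block"},
    "default":     {"*": "warn"},
}

def decide(group: str, tool: str) -> str:
    """Resolve the action for a user group and AI tool, most specific rule first."""
    rules = AI_POLICY.get(group, AI_POLICY["default"])
    return rules.get(tool, rules.get("*", "warn"))

assert decide("engineering", "claude.ai") == "allow"
assert decide("finance", "chatgpt-enterprise") == "block"
assert decide("engineering", "some-new-tool") == "warn"
```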

Layer 3: Cloud Application Control (CAC)

This is where most governance programs either win or fail.

Cloud Application Control is tenant-level policy for SaaS. Instead of "users can or can't reach chatgpt.com," it's "users can reach chatgpt.com, but only while signed in to our enterprise tenant." Personal logins get blocked at the identity level, not the URL level.
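
A common enforcement mechanism, given the SSL inspection from Layer 1, is header injection: the gateway adds a vendor-defined HTTP header to outbound requests, and the SaaS provider then refuses sign-ins outside the listed tenants. Microsoft 365 and Google Workspace both document such headers. Here's a sketch of the idea as a mitmproxy addon; this is illustrative only, not dope.security's implementation, and consumer AI tools like ChatGPT and Claude require their own vendor-specific mechanisms.

```python
"""Illustrative mitmproxy addon: the header-injection idea behind tenant
restrictions. The Microsoft and Google headers below are documented by those
vendors; all values are placeholders."""
from mitmproxy import http

MS_TENANTS = "acme.onmicrosoft.com"                   # tenants allowed to sign in
MS_CONTEXT = "00000000-0000-0000-0000-000000000000"   # your directory ID
GOOGLE_DOMAINS = "acme.com"                           # allowed Workspace domains

def request(flow: http.HTTPFlow) -> None:
    host = flow.request.pretty_host
    if host.endswith("login.microsoftonline.com"):
        # Microsoft 365 tenant restrictions: both headers are required.
        flow.request.headers["Restrict-Access-To-Tenants"] = MS_TENANTS
        flow.request.headers["Restrict-Access-Context"] = MS_CONTEXT
    elif host.endswith("google.com"):
        # Google Workspace: sign-ins outside these domains are blocked.
        flow.request.headers["X-GooGApps-Allowed-Domains"] = GOOGLE_DOMAINS
```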

Why it matters:

  • Your company pays for ChatGPT Enterprise. Those seats come with DLP, audit logs, admin controls, data residency. The value of that contract is zero if employees log in with personal OpenAI accounts instead.
  • Personal ChatGPT sessions are outside every enterprise control. Prompts get stored. Training may include them. There's no audit trail for HR or compliance.
  • CAC gets you both outcomes at once: people use AI, and they do it inside your governed container.

In dope.security, Cloud Application Control restricts access to approved SaaS tenants for ChatGPT, Claude, Google Workspace, Microsoft 365, and others. Employees can sign in to the corporate tenant only. Personal tenant logins are blocked at the edge.

This is the layer most vendors don't talk about because most vendors can't do it.

Layer 4: Endpoint DLP on the prompt

The final layer. This one is optional in theory and mandatory in practice for regulated industries.

Even inside a corporate ChatGPT Enterprise tenant, a user can paste a source code file with hard-coded credentials into the prompt. The tenant won't stop them. Nothing on the server side knows the credentials were sensitive until they're already in OpenAI's systems.

Dopamine DLP intercepts the prompt on-device, before submit, classifies it, and enforces policy in one of three modes: Block, Monitor, or Off. Classification runs against zero-retention OpenAI APIs, so the DLP itself doesn't become a training-data risk.

When the classifier flags a prompt, Dopamine DLP shows the user a plain-English explanation of what was detected and why. The user learns. The org gets protection. The compliance team gets an audit trail.
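
Here's a minimal sketch of that intercept-classify-enforce loop, assuming an OpenAI-compatible, zero-retention classification endpoint. The function names, model choice, and prompt format are hypothetical, not Dopamine DLP's actual implementation.

```python
from enum import Enum
from openai import OpenAI  # assumes an OpenAI-compatible, zero-retention endpoint

class Mode(Enum):
    BLOCK = "block"
    MONITOR = "monitor"
    OFF = "off"

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify(prompt: str) -> tuple[bool, str]:
    """Ask the classifier whether a prompt contains sensitive data.
    Returns (is_sensitive, plain-English reason)."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content":
                "Reply 'SENSITIVE: <reason>' or 'CLEAN' for the user's text."},
            {"role": "user", "content": prompt},
        ],
    )
    verdict = resp.choices[0].message.content or "CLEAN"
    return verdict.startswith("SENSITIVE"), verdict

def on_submit(prompt: str, mode: Mode) -> bool:
    """Runs before the prompt leaves the device. True means allow the submit."""
    if mode is Mode.OFF:
        return True
    sensitive, reason = classify(prompt)
    if sensitive:
        print(f"DLP: {reason}")   # the plain-English explanation shown to the user
        if mode is Mode.BLOCK:
            return False          # Block mode: stop the prompt entirely
    return True                   # Monitor mode: log it, let it through
```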

A decision tree for real teams

Not every team needs all four layers. A rough decision tree:

  • Everyone: Layer 1 (visibility) and Layer 3 (CAC to restrict to corporate tenants).
  • Teams handling PII, PHI, or IP: Add Layer 4 (endpoint DLP on prompts).
  • Regulated teams (finance, healthcare, defense): All four layers, Block mode on Layer 4 by default, with exceptions.
  • General knowledge workers: Layers 1 and 3, plus Monitor mode DLP for awareness without friction.
  • Teams experimenting with new AI tools: Layer 2 in Warn mode, Layer 3 if the tool supports SSO.

Map your teams. Map your data. Map your risk tolerance. Then assemble the stack.
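
If it helps to operationalize the mapping, the tree above reduces to a small lookup keyed on two risk questions. Profile names and layer sets are illustrative:

```python
# Illustrative only: the decision tree above, reduced to a lookup.
STACKS = {
    "baseline":  {"layers": (1, 3), "dlp_mode": None},          # general knowledge work
    "sensitive": {"layers": (1, 3, 4), "dlp_mode": "monitor"},  # PII / PHI / IP
    "regulated": {"layers": (1, 2, 3, 4), "dlp_mode": "block"}, # finance, healthcare, defense
}

def stack_for(handles_sensitive_data: bool, regulated: bool) -> dict:
    """Pick a stack profile from two risk questions."""
    if regulated:
        return STACKS["regulated"]
    if handles_sensitive_data:
        return STACKS["sensitive"]
    return STACKS["baseline"]

print(stack_for(handles_sensitive_data=True, regulated=False))
# -> {'layers': (1, 3, 4), 'dlp_mode': 'monitor'}
```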

Why "just block it" really fails

Blocking fails because it treats AI as a single risk: exposure of a prompt to a third party. In reality, there are five distinct risks:

  1. Leakage of sensitive data in a prompt.
  2. Use of personal accounts outside enterprise governance.
  3. Use of unvetted AI tools with unknown data practices.
  4. Compliance gaps (audit trails, data residency).
  5. Reputational and legal exposure from uncontrolled output.

Blocking addresses risk one, badly, by pushing the activity onto ungoverned personal devices. The full stack addresses all five.

What dope.security brings to this

All four layers are in one platform.

  • Shadow IT discovery runs on dope.SWG logs. OAuth provider and personal-vs-corporate account breakdowns come out of the box.
  • SWG policy (allow, warn, block) is configured in dope.console and pushes to every dope.endpoint in seconds, not the 30 to 60 minutes of legacy polling.
  • Cloud Application Control restricts to approved tenants for ChatGPT, Claude, Google, and Microsoft 365. Datasheets for ChatGPT and Claude CAC are on the site.
  • Dopamine DLP inspects prompts on-device, in three modes (Block, Monitor, Off), backed by US Patent 12,464,023 and zero-retention classification APIs.

One console. One agent. One policy plane. One place for the CISO to see the whole stack.

Get the playbook

If you want the expanded walkthrough with screenshots and the actual policies we've rolled out at customer orgs, we'll walk you through it in 20 minutes.
