AI Usage Policy: What Goes In It, and How to Actually Enforce It Without Killing Productivity

An AI usage policy is the document most organizations wrote in 90 minutes in 2023, posted on the intranet, and never enforced. It usually said something like "use AI responsibly, don't paste customer data into ChatGPT, ask IT if you're not sure." That's not a policy. That's a wish.

In 2026, the wish version has aged badly. Employees are using AI every day, agents are running inside browsers, and the data leaking out the door doesn't look like the data your DLP was built to catch. The policy you publish has to do real work. It has to draw clear lines, point to specific tools, and connect to controls that enforce those lines without slowing the team down.

This is a practical guide to writing an AI usage policy that actually gets implemented. What goes in the document, what does not go in it, and the three-layer enforcement model that makes it real.

What goes in an AI usage policy

Eight sections, kept short. If your policy runs longer than two pages, no one will read it past the first page.

1. Scope

Which people, which devices, which tools. Be explicit about contractors, third parties, and BYOD if they exist. Most policies fail by leaving these ambiguous.

2. Approved tools

A named list, by category. "Generative chat: Claude (corporate tenant), ChatGPT Enterprise. Coding: GitHub Copilot (corporate), Cursor (corporate). Meeting AI: Fireflies (corporate)." Specific. Updated quarterly. Personal versions of any of these are not on the list.

3. Restricted and prohibited tools

Browser extensions with broad permissions. Free LLM clients that send prompts to unknown servers. AI agents that authenticate to a personal Google or Microsoft account. Name them where you can. Where you can't, name the categories.

4. Data classes and rules

Map your data classification (Public, Internal, Confidential, Restricted, Regulated) to AI usage. Public: anything goes. Internal: corporate tenants only. Confidential: corporate tenants with logging, no third-party agents. Restricted and Regulated: do not use AI tools at all, or only specific approved tools with DLP in Block mode.
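The mapping above is easiest to enforce when it lives in machine-readable form that your controls can read. A minimal sketch, assuming a five-class scheme like the one above; the field names and structure are illustrative, not any product's schema:

```python
# Hypothetical mapping of data classes to AI usage rules.
# Class names follow the example in the text; fields are illustrative.
DATA_CLASS_RULES = {
    "Public":       {"ai_allowed": True,  "tenant": "any",       "dlp_mode": "Off"},
    "Internal":     {"ai_allowed": True,  "tenant": "corporate", "dlp_mode": "Monitor"},
    "Confidential": {"ai_allowed": True,  "tenant": "corporate", "dlp_mode": "Monitor",
                     "third_party_agents": False},
    "Restricted":   {"ai_allowed": False, "tenant": None,        "dlp_mode": "Block"},
    "Regulated":    {"ai_allowed": False, "tenant": None,        "dlp_mode": "Block"},
}

def rule_for(data_class: str) -> dict:
    """Look up the AI rule for a data class; unknown classes get the strictest rule."""
    return DATA_CLASS_RULES.get(data_class, DATA_CLASS_RULES["Restricted"])
```

Defaulting unknown classes to the strictest rule means a mislabeled document fails closed rather than open.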

5. Account requirements

Corporate-tenant accounts only. No personal logins on managed devices. This is the single most important line in the document, and it's the one that requires real enforcement to mean anything.

6. Logging and retention

What gets logged (SWG events, CASB activity, DLP violations, AI tool access patterns). Where it goes (SIEM, retention period). Who can query it (security only, by role, with audit).

7. Exceptions and how to request them

Real teams have real edge cases. Marketing wants Midjourney for a campaign. A researcher needs Perplexity for primary source work. Build an exception path that takes hours, not weeks. If exceptions take six weeks, employees will just go around the policy.

8. Consequences and escalation

What happens when a policy violation fires. First violation: educational follow-up. Second: manager involvement. Third: HR escalation. Spell it out. Ambiguous consequences produce ambiguous compliance.

What does not go in an AI usage policy

Two anti-patterns.

Aspirational language without controls. "Employees will use good judgment when interacting with AI tools." That's not a policy. That's a sentence on a poster. Every rule in the policy has to point to a control or it's noise.

Tool bans that aren't enforceable. "Personal ChatGPT is prohibited." Without enforcement at the network or endpoint layer, that line is decorative. If you can't block it, don't ban it. Either build the control first or write a softer rule that the team will actually follow.

The three-layer enforcement model

The policy is a document. Enforcement is three layers of controls working together.

Layer 1: Discover what your fleet is actually using

Before you can govern AI use, you need to see it. A Secure Web Gateway that inspects SSL on-device gives you full log fidelity for every AI request: destination, time, user, OAuth provider. Sort by domain, filter to AI categories, count distinct tools and accounts.

Most teams find 20 to 40 distinct AI endpoints the first time they look at a month of traffic, far more than the IT-approved list. Shadow AI discovery walks through how to operationalize this.
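The "sort, filter, count" step above is simple aggregation over gateway logs. A sketch of the idea, assuming logs reduced to (user, destination) pairs; the field layout and the AI-domain list are illustrative placeholders, not a real log schema:

```python
from collections import Counter

# Illustrative AI destinations; in practice this comes from your
# gateway's AI category, not a hand-maintained set.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "perplexity.ai"}

def count_ai_endpoints(log_rows):
    """Tally hits per AI destination and which users touched each one."""
    hits = Counter()
    users = {}
    for user, domain in log_rows:
        if domain in AI_DOMAINS:
            hits[domain] += 1
            users.setdefault(domain, set()).add(user)
    return hits, users

rows = [("alice", "claude.ai"), ("bob", "chat.openai.com"), ("alice", "claude.ai")]
hits, users = count_ai_endpoints(rows)
```

The distinct-users-per-tool view is usually more actionable than raw hit counts: one power user on an unapproved tool is a conversation, forty users is a procurement signal.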

Layer 2: Restrict access at the tenant layer

This is the layer most policies skip, and it's where most violations happen. Cloud Application Control restricts access to your enterprise ChatGPT, Claude, Google, and Microsoft tenants while blocking personal logins on the same domain.

A user can still use ChatGPT, but only the corporate instance. The personal version is blocked at the OAuth layer, on-device, before the request leaves the laptop. We've covered this for ChatGPT specifically. The same control applies to Claude, Gemini, and Microsoft 365.
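One common mechanism behind tenant-level control is header injection: Microsoft and Google both document tenant-restriction request headers (Restrict-Access-To-Tenants and X-GoogApps-Allowed-Domains, respectively) that a proxy can add so only corporate accounts can sign in. Other providers vary, so check each vendor's documentation. A minimal sketch with placeholder tenant values:

```python
# Map of login hosts to the tenant-restriction headers a proxy would inject.
# Header names are documented by Microsoft and Google; the tenant values
# here are placeholders for your own domains/tenant IDs.
TENANT_HEADERS = {
    "login.microsoftonline.com": {"Restrict-Access-To-Tenants": "example.com"},
    "accounts.google.com":       {"X-GoogApps-Allowed-Domains": "example.com"},
}

def inject_tenant_headers(host: str, headers: dict) -> dict:
    """Return the request headers with tenant restrictions added for known hosts."""
    extra = TENANT_HEADERS.get(host, {})
    return {**headers, **extra}
```

Because the headers ride on the HTTPS request, this only works where SSL inspection happens, which is why doing it on-device matters.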

Layer 3: Inspect prompts and uploads inline

Discovery and tenant control set the rails. DLP enforces what runs on them. Endpoint DLP intercepts AI prompts and file uploads on the device, classifies content using language models, and applies Block, Monitor, or Off based on policy.

The classification step matters. Pattern-matching DLP misses paraphrased content. LLM-based classification reads the prompt and decides if it contains regulated data, even when the user didn't paste a recognizable credit card number.
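The classify-then-act flow can be sketched in a few lines. The keyword check below is a stub standing in for the language-model classifier the text describes; a real deployment would call a model here, precisely because keywords miss paraphrased content. The mode names follow the Block/Monitor/Off scheme above:

```python
from enum import Enum

class Action(Enum):
    BLOCK = "block"      # stop the prompt before it leaves the device
    MONITOR = "monitor"  # allow, but log the violation
    ALLOW = "allow"

def classify(prompt: str) -> str:
    """Stub classifier. A real deployment calls an LLM here; this keyword
    check exists only to make the control flow runnable."""
    sensitive = ("ssn", "account number", "patient")
    return "Regulated" if any(k in prompt.lower() for k in sensitive) else "Internal"

def enforce(prompt: str, mode: str) -> Action:
    """Apply the policy mode to a classified prompt."""
    if mode == "Off":
        return Action.ALLOW
    if classify(prompt) == "Regulated":
        return Action.BLOCK if mode == "Block" else Action.MONITOR
    return Action.ALLOW
```

Note that Monitor mode runs the same classification as Block mode; the only difference is the action. That is what makes the two-week Monitor rollout described later a safe dress rehearsal.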

Three layers, working together: discovery, tenant control, inline DLP. That's how a policy goes from a document to an enforced reality.

A short AI usage policy template

If you're starting from a blank page, this is a skeleton you can edit in an afternoon.

1. Scope. This policy applies to all employees, contractors, and any third parties using AI tools on company-managed devices or with company data.

2. Approved AI tools. [Named list, by category, updated quarterly.] All approved tools must be accessed through the corporate tenant. Personal accounts on the same services are not approved use.

3. Prohibited use. Pasting Restricted or Regulated data into any AI tool, including approved ones. Using AI tools accessed through personal accounts. Installing browser extensions or agents that access company data without explicit IT approval.

4. Data class rules. [Map data classes to AI tool rules.]

5. Logging. All AI tool access is logged through the corporate SWG, CASB, and endpoint DLP. Logs are retained for [X] days. Access is limited to security and audit.

6. Exceptions. Request exceptions through [ticket queue]. SLA: [X business hours].

7. Consequences. First violation: educational. Second: manager involvement. Third: HR escalation.

Then make sure each section is wired to a control.

How to roll it out without killing productivity

Five practical moves.

Start in Monitor mode. Turn on DLP and discovery without blocking anything for two weeks. Learn what the team is actually doing. The dashboard alone will surface 80% of the conversations you need to have.

Communicate before enforcing. A surprise policy generates helpdesk tickets and workarounds. A communicated policy with a one-week heads-up generates compliance.

Make the approved path obvious. If the corporate Claude tenant is harder to use than personal Claude, employees will use personal Claude. Friction kills compliance.

Iterate, don't litigate. Policies that are revised quarterly stay alive. Policies that are written once and never touched die at the first edge case.

Pair the policy with training. Twenty minutes, recorded, available on-demand. Show what's allowed, what's not, what happens when a rule fires, and how to request an exception.

Where dope.security fits

The three enforcement layers map directly to dope.security products: discovery in dope.SWG and CASB Neural logs, tenant-level enforcement in Cloud Application Control, and on-device inspection in Dopamine DLP. One agent, one console, one policy plane. Agentic AI falls under the same governance pattern.

If your AI usage policy is a Confluence page nobody opens, the gap between it and the laptop is what we close. Book a 20-minute demo or start an instant trial.

Be bold. Be passionate. Be dope.

AI Security
Cloud App Control
Compliance
How-To