Practical Guide · 7 min read

What to Put in an AI Use Policy (Even If It’s Just One Page)

You know you should have an AI use policy. Maybe a client asked about it, maybe you read something alarming about data leaks, or maybe you just realized that people on your team are using ChatGPT, Claude, and Copilot in ways nobody’s talked about.

The good news: writing an AI use policy is not hard. The bad news: most people overthink it, write something nobody reads, or never start at all.

This guide walks you through what to include, what to skip, and how to write something useful in an afternoon.

Why Bother With a Written Policy?

Three reasons.

Clarity. People are already making decisions about what AI tools to use and what data to put into them. Without a policy, everyone is guessing. A written policy turns implicit assumptions into explicit rules.

Protection. If something goes wrong — a data leak, a biased output, a confidentiality breach — having a policy in place shows you weren’t negligent. You thought about the risks and communicated expectations. This matters to clients, partners, and regulators.

Scalability. When it’s just you or a small team, everyone can talk through decisions informally. That breaks down as you grow. A policy gives you a reference point that doesn’t depend on everyone being in the room.

The Core Sections

A useful AI use policy needs five things. You can cover all of them in one to two pages.

1. Approved tools. Name the AI tools people are allowed to use. Be specific: "ChatGPT (Team plan), GitHub Copilot, and Grammarly" is better than "AI tools approved by management." If there are tools that are explicitly not approved, say so. If the answer is "check with [person] before using something new," say that.

2. Data rules. This is the most important section. What data can go into AI tools and what can’t? At minimum, define these categories: customer/client data (usually no), proprietary business information (probably no), personal data (no, unless the tool’s privacy terms allow it), and general business content like drafts, brainstorms, and internal summaries (usually fine). Be specific about what "sensitive data" means in your context.

3. Output rules. AI generates content, but it doesn’t guarantee accuracy. Your policy should say when AI-generated content needs human review before it goes out — especially for anything customer-facing, published externally, or used in important decisions. A good default: all external-facing AI-generated content must be reviewed by a human before use.

4. Prohibited uses. Name the things people should never do with AI. Common examples: making final hiring or firing decisions based solely on AI output, submitting AI-generated content as original work where that’s not disclosed, using AI for tasks involving attorney-client privilege or legally protected information, and generating content that impersonates real people.

5. Who to ask. Every policy needs a point of contact. Who do people go to when they’re not sure whether a use case is okay? This can be a person, a Slack channel, or an email alias. The point is that people know where to get an answer rather than guessing.

What to Skip (For Now)

Your first policy does not need to cover:

• Detailed technical specifications of how AI models work
• A comprehensive risk assessment framework
• Training curricula for the whole team
• Vendor evaluation matrices
• Regulatory analysis by jurisdiction

These are all valuable things. They’re also second-order concerns. The first-order concern is getting basic guardrails in place so people know what’s okay and what’s not. You can build on the policy later.

Mistakes People Make

Writing it like a legal document. If your policy reads like a terms of service agreement, nobody will read it. Write it in plain English. Short sentences. Clear rules. If a new team member can’t understand it in five minutes, it’s too complicated.

Not sharing it. A policy in a Google Doc that nobody knows about is not a policy. Send it to everyone. Mention it in onboarding. Pin it in Slack. The whole point is that people read it and follow it.

Making it too restrictive. If your policy bans all AI use, people will use AI anyway and just not tell you. That’s worse than having no policy at all because now you have unmanaged risk and no visibility into it. A good policy enables smart use while setting clear boundaries.

Never updating it. AI tools change fast. Your policy should change too. Review it every quarter, or whenever you adopt a major new tool or use case.

A Starter Template

Here’s a structure you can copy and fill in:

AI Use Policy — [Your Name / Company Name] *Last updated: [Date]*

Approved Tools: [List specific tools and plans]

Data Rules:
• Do NOT put customer data, personal information, or confidential business data into AI tools
• General business content (drafts, brainstorms, internal summaries) is fine for approved tools
• When in doubt, ask [contact person] before using a tool with sensitive information

Output Review:
• AI-generated content for external use (client work, published content, official communications) must be reviewed by a human before sending
• AI-generated code should go through normal code review

Prohibited Uses:
• [List your specific prohibitions]

Questions? Contact [person/channel]
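If your internal docs live in Markdown (a wiki, a repo, a Notion export), here is the same starter structure as a copy-paste snippet. Every bracketed value is a placeholder to fill in; the headings are suggestions, not requirements.

```markdown
# AI Use Policy: [Your Name / Company Name]
_Last updated: [Date]_

## Approved Tools
- [List specific tools and plans]

## Data Rules
- Do NOT put customer data, personal information, or confidential business data into AI tools
- General business content (drafts, brainstorms, internal summaries) is fine for approved tools
- When in doubt, ask [contact person] before using a tool with sensitive information

## Output Review
- AI-generated content for external use must be reviewed by a human before sending
- AI-generated code should go through normal code review

## Prohibited Uses
- [List your specific prohibitions]

## Questions?
Contact [person/channel]
```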

That’s it. Fill it in, share it, and you’re ahead of most people.

What Comes Next

Once you have a basic policy in place, the natural next steps are reviewing the privacy terms of every AI tool you use, doing a lightweight risk review of your highest-impact AI use cases, and training your team on the policy (even a 15-minute walkthrough counts).

But don’t let "next steps" delay getting the first version out. A simple policy shared today is more valuable than a perfect policy planned for next quarter.

Key Takeaways

  • A useful AI use policy needs five things: approved tools, data rules, output review requirements, prohibited uses, and a point of contact
  • Write it in plain English, keep it to one or two pages, and actually share it with everyone
  • Don’t let perfect be the enemy of good — a simple policy now beats a comprehensive one never
  • Review and update the policy quarterly or when your AI usage changes significantly
