Most organizations already have employees using AI tools. The question is whether you have an AI use policy that governs how they're doing it. I've reviewed dozens of these policies over the past two years, from healthcare systems to defense contractors to financial services firms. The good ones share common characteristics: they're specific about what's allowed and what isn't, they acknowledge real risk without being paranoid, and they're written to be enforced rather than filed.
The bad ones read like compliance theater. They declare that AI shall be used "responsibly" and "ethically" without defining either term. They create approval processes so vague that no one knows who decides what. They prohibit "sensitive data" without explaining what that means in your environment.
An AI use policy serves three functions: it protects regulated data, it manages legal and reputational risk, and it gives employees clear guidance they can actually follow. If your policy doesn't accomplish all three, it's not working.
Why Generic AI Policies Fail in Regulated Environments
The pattern I see most often is organizations downloading a template, changing the company name, and calling it done. This fails because AI risk is context-dependent. What matters in healthcare is different from what matters in defense contracting, and a policy that tries to cover everything ends up addressing nothing.
In healthcare, the immediate concern is PHI entering an AI model that doesn't have a Business Associate Agreement in place. In defense work, it's CUI or ITAR-controlled technical data being sent to a cloud service that isn't FedRAMP authorized. In financial services, it's non-public personal information feeding a model that might use it for training. These are fundamentally different risks requiring different controls.
A working AI use policy starts with your regulatory obligations. If you handle HIPAA data, your policy needs explicit language about what constitutes a permitted use under your BAAs and what tools are pre-approved for PHI. If you're a defense contractor subject to CMMC or NIST 800-171, you need clarity on which AI services meet your system security plan requirements.
Generic policies also fail because they don't address the tools people are actually using. Your policy should name specific categories: large language models like ChatGPT or Claude, code completion tools like GitHub Copilot, AI features embedded in Microsoft 365 or Google Workspace, and any industry-specific AI applications. Employees need to know whether the AI assistant in their CRM is approved or whether it needs review.
Required Sections for an Enforceable AI Use Policy
Every AI use policy needs a section on scope and applicability. This answers who the policy applies to (employees, contractors, vendors with system access) and what systems or data types it covers. Be specific, for example: "This policy applies to all uses of generative AI, automated decision-making systems, and AI-enabled tools that process company data or data belonging to our customers, patients, or partners."
You need a definitions section, but keep it practical. Define "AI system" broadly enough to capture new tools but specifically enough that people understand what you mean. Define "regulated data" in terms your organization actually uses: PHI, CUI, ITAR-controlled technical data, PII under state privacy laws, whatever applies to your environment. Don't make people translate compliance jargon into their daily work.
Approved Use Cases and Pre-Authorized Tools
This is where most policies get timid. They list broad categories like "productivity enhancement" or "research" without giving examples. That forces employees to guess, and most will either avoid AI entirely or use it anyway and not tell anyone.
A better approach is to be explicit about approved use cases and name specific tools that have been reviewed. Here's language from a policy I helped develop for a healthcare organization:
The following AI tools are approved for use with de-identified data only: ChatGPT Enterprise, Claude Pro, Microsoft Copilot (with commercial data protection enabled). Approved uses include drafting communications, summarizing non-clinical documents, generating code for internal tools, and research using publicly available information. These tools may not be used with PHI, patient names, medical record numbers, or any data that could be linked to an individual.
That paragraph does several things: it names specific tools so people know what's allowed, it defines the boundary clearly (de-identified data only), and it gives examples of permitted activities. An employee reading that knows whether their specific use case is covered.
For defense contractors, the equivalent section needs to address CUI and ITAR. You might specify that AI tools may be used for unclassified business functions but that any system processing CUI must be part of your assessed environment and meet NIST 800-171 controls. You might prohibit all AI use with ITAR-controlled technical data unless the tool is specifically authorized by your export control officer.
Prohibited Uses and Data Types
Be equally specific about what's not allowed. Don't write "don't use AI with sensitive data." Write "Do not input the following data types into any AI system unless specifically authorized in writing: protected health information, controlled unclassified information, ITAR-controlled technical data, social security numbers, financial account numbers, or any personal information subject to GDPR, CCPA, or other privacy laws."
List prohibited use cases: making decisions about hiring, firing, or promotion without human review; determining medical diagnoses or treatment plans; generating content that will be attributed to a licensed professional without review; creating documents that will be filed with regulators or courts without verification.
I've seen organizations get pushback on detailed prohibition lists because someone always asks "what if we need to use AI for that later?" The answer is that policies can be updated. It's easier to add a permitted use than to remediate a data breach caused by a vague policy that let someone make a judgment call.
Building an Approval Process That Works
Most organizations need two tracks: fast approval for low-risk use cases and a more thorough review for anything involving regulated data or automated decisions. The mistake is making everything go through a committee that meets monthly. That guarantees people will work around your policy.
For a fast-track process, define criteria that allow automatic approval. If someone wants to use a pre-authorized tool for a use case that doesn't involve regulated data, they can proceed without waiting. Require them to document what they're doing, but don't make them wait for permission.
For higher-risk use cases, your approval process should involve relevant stakeholders: privacy if it touches personal data, security if it requires new integrations, compliance if it affects regulatory obligations, legal if it creates liability concerns. But assign a single decision-maker. I've seen too many AI review committees where everyone has veto power and nothing gets approved.
A working approval workflow includes: a brief description of the proposed use, what data will be processed, what AI system will be used, who the vendor is and where data will be stored, what business purpose this serves, and what alternatives were considered. That's enough information to make a risk decision without requiring a full impact assessment for every request.
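To make that intake concrete, here's a minimal sketch in Python of how the two-track triage might look. The `AIUseRequest` record, the tool list, and the data categories are illustrative assumptions, not a prescribed schema; your real criteria come from your own approved-tool register and data classification policy.

```python
from dataclasses import dataclass

# Hypothetical pre-authorized tools and regulated-data categories; the real
# lists come from your approved-tool register and data classification policy.
PRE_AUTHORIZED_TOOLS = {"ChatGPT Enterprise", "Claude Pro", "Microsoft Copilot"}
REGULATED_CATEGORIES = {"PHI", "CUI", "ITAR technical data", "PII", "financial account data"}

@dataclass
class AIUseRequest:
    """Intake record mirroring the fields the policy asks for."""
    description: str               # brief description of the proposed use
    data_categories: set           # what data will be processed
    ai_system: str                 # what AI system will be used
    vendor: str                    # who the vendor is
    data_location: str             # where data will be stored
    business_purpose: str          # what business purpose this serves
    alternatives_considered: str   # what alternatives were considered
    automated_decision: bool = False  # decisions about people without human review?

def route_request(req: AIUseRequest) -> str:
    """Two-track triage: fast-track low-risk requests, escalate everything else."""
    touches_regulated = bool(req.data_categories & REGULATED_CATEGORIES)
    if req.ai_system in PRE_AUTHORIZED_TOOLS and not touches_regulated and not req.automated_decision:
        return "Fast track: document the use and proceed."
    return "Full review: route to privacy, security, compliance, and legal with a single decision-maker."

print(route_request(AIUseRequest(
    description="Summarize a publicly released RFP",
    data_categories={"public"},
    ai_system="ChatGPT Enterprise",
    vendor="OpenAI",
    data_location="Vendor cloud (US)",
    business_purpose="Proposal drafting",
    alternatives_considered="Manual summary",
)))
```

The point of the sketch is the shape of the decision, not the code: low-risk requests get an answer immediately, and everything else lands in front of a named decision-maker.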
Documentation and Recordkeeping
Your policy should specify what records you're keeping and why. At minimum, maintain a register of approved AI systems with information about vendor contracts, data processing agreements, security assessments, and authorized use cases. When someone gets approval for a new use, that goes in the register.
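The register itself can be as simple as a structured list that your governance owner maintains. A minimal sketch follows; the field names and contract reference are illustrative, not a required format.

```python
# Minimal AI system register; field names and values are illustrative.
ai_system_register = [
    {
        "system": "ChatGPT Enterprise",
        "vendor": "OpenAI",
        "contract": "MSA-2024-017",  # hypothetical contract reference
        "data_processing_agreement": True,
        "security_assessment": "2024-Q3 review on file",
        "authorized_uses": ["drafting communications", "summarizing non-clinical documents"],
    },
]

def record_approved_use(system_name: str, use_case: str) -> None:
    """When a new use is approved, add it to that system's register entry."""
    for entry in ai_system_register:
        if entry["system"] == system_name:
            entry["authorized_uses"].append(use_case)
            return
    raise ValueError(f"{system_name} is not in the register; run the approval process first.")
```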
For regulated industries, you need audit trails. If you're subject to HIPAA, document which AI tools have BAAs in place and what safeguards are applied. If you're a defense contractor, document how AI tools fit within your system security plan and which ones process CUI. If you're subject to the EU AI Act or emerging state laws, document your risk assessments and impact evaluations.
The documentation requirement also serves enforcement. When someone violates the policy, you need records showing they had access to the policy, received training on it, and acknowledged their obligations. This matters for both disciplinary actions and regulatory defense.
Handling Third-Party AI and Vendor Risk
Your AI use policy needs to address vendors and third parties because that's where most organizations have blind spots. Your employees might follow the policy perfectly while a vendor processes your data through an unapproved AI system, and you own that risk.
The policy should require that any vendor using AI to process your data must disclose it before doing so. This includes obvious cases like hiring an AI service provider and less obvious ones like your payroll company adding AI features or your electronic health record vendor enabling AI-assisted documentation.
Specify what your vendor management process looks like for AI: what questions you ask, what contract terms you require, what technical safeguards you expect. For healthcare organizations, this means ensuring BAAs explicitly cover AI processing. For defense contractors, it means confirming that vendor AI systems meet the same security standards as your own. For any organization subject to privacy laws, it means understanding whether AI processing is compatible with your data processing agreements.
The most mature approach I've seen is a separate addendum to vendor contracts specifically for AI. It defines what AI uses are permitted, requires notice before new AI features are enabled, specifies data retention and training restrictions, and creates audit rights. This is especially important because vendors change their AI capabilities constantly, and your initial contract review doesn't cover what they'll roll out next quarter.
A related consideration is the third-party risk management framework for AI vendors. Your policy should integrate with your broader vendor risk program rather than creating a separate track that no one maintains.
Training Requirements and Employee Awareness
An AI use policy that no one reads is just risk documentation for your next audit. Training needs to happen at multiple points: when the policy is first implemented, as part of new hire onboarding, and whenever the policy materially changes. But most organizations also need role-specific training.
For employees who handle regulated data, training should include examples of what an impermissible use looks like. Show them what happens if someone pastes a patient conversation into ChatGPT or uploads a document with CUI to a public AI service. Make it concrete.
For managers and team leads, training should cover how the approval process works and what their responsibilities are. They need to know when to escalate, how to identify prohibited uses on their teams, and what to do if someone violates the policy.
For procurement and vendor management, training should address contractual requirements, what to look for in vendor AI disclosures, and when to involve legal or compliance in vendor AI reviews.
The training should also cover the "why" behind restrictions. Employees who understand that certain AI tools are off-limits because they don't meet BAA requirements are more likely to comply than employees who just see arbitrary rules. This is especially true for knowledge workers who are used to choosing their own productivity tools.
I recommend brief, scenario-based training over lengthy policy reviews. Give people five realistic scenarios and ask them what's permitted. Someone wants to use ChatGPT to summarize an RFP response—is that allowed? Someone wants to use an AI coding assistant to write a script that processes customer data—what's the approval process? The goal is to build judgment, not just policy awareness.
Enforcement and Consequences
A policy without enforcement is a suggestion. Your AI use policy needs clear consequences for violations, and those consequences need to be proportionate to the risk and applied consistently.
Distinguish between good-faith errors and reckless conduct. An employee who uses an approved AI tool but inadvertently includes regulated data is different from an employee who knowingly uploads CUI to a prohibited service. The first is a training issue; the second is a disciplinary matter.
Your policy should specify what happens when violations occur: immediate reporting requirements, investigation procedures, remediation steps, and potential disciplinary actions. For serious violations involving regulated data, you might need to notify customers, regulators, or affected individuals depending on what laws apply. Make sure your policy connects to your broader incident response plan.
Technical controls enforce policy better than documentation. If you don't want employees using unapproved AI services, block them at the network or endpoint level. If you need visibility into AI use, deploy tools that can identify AI traffic or monitor cloud service connections. If certain AI tools are approved only for de-identified data, implement DLP rules that prevent regulated data from reaching those services.
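As a rough illustration of the DLP idea, the sketch below screens text for a few regulated-data patterns before it reaches an AI endpoint. The patterns are examples only; a production deployment would use your DLP vendor's rule engine and far more robust detection than simple regexes.

```python
import re

# Illustrative patterns only; real DLP rules are tuned to your data formats
# and use context, validation, and vendor tooling rather than bare regexes.
BLOCKED_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN (example format)": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "CUI marking": re.compile(r"\bCUI\b|\bCONTROLLED UNCLASSIFIED\b", re.IGNORECASE),
}

def screen_before_submission(text: str) -> list:
    """Return the names of any rules that match, so the request can be blocked or flagged."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]

hits = screen_before_submission("Patient MRN: 00482913 follow-up summary")
if hits:
    print(f"Blocked before reaching the AI service: matched {hits}")
```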
The challenge is that technical controls can't catch everything, especially with shadow IT and personal devices. Your policy needs both technical enforcement for high-risk scenarios and cultural enforcement through training, awareness, and accountability.
Monitoring and Audit
Your policy should explain how compliance will be monitored. This might include periodic reviews of AI tool usage, audits of approval documentation, testing of technical controls, and interviews with employees about their AI practices. For regulated organizations, this monitoring should tie into your broader compliance program.
In healthcare, monitoring might involve reviewing audit logs from approved AI systems to confirm only authorized users are accessing them and only de-identified data is being processed. In defense contracting, it might involve verifying that all AI systems processing CUI are documented in your system security plan and subject to the same continuous monitoring as other systems.
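A simple way to picture that log review, assuming a hypothetical log format since every vendor exposes usage data differently:

```python
# Sketch of a periodic usage-log review; the log fields are assumptions.
authorized_users = {"jsmith", "mchen", "apatel"}

usage_log = [
    {"user": "jsmith", "tool": "ChatGPT Enterprise", "timestamp": "2025-01-14T09:22:00"},
    {"user": "contractor42", "tool": "ChatGPT Enterprise", "timestamp": "2025-01-14T10:05:00"},
]

unauthorized_access = [e for e in usage_log if e["user"] not in authorized_users]
for event in unauthorized_access:
    # Each hit becomes a finding for compliance review, not an automatic sanction.
    print(f"Review needed: {event['user']} used {event['tool']} at {event['timestamp']}")
```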
Set a review schedule for the policy itself. AI capabilities and risks change quickly, and your policy needs to keep pace. I recommend quarterly reviews for organizations in rapidly evolving industries or with significant AI adoption, and at least annual reviews for everyone else. Include a trigger for interim reviews if there's a significant AI-related incident in your industry or a new regulation affecting AI use.
Policy Language That Actually Works
The difference between a policy that gets followed and one that gets ignored often comes down to how it's written. Avoid passive voice and compliance jargon. Instead of "regulated data shall not be processed by unauthorized AI systems," write "Do not input patient data, CUI, or other regulated information into AI tools unless the tool is listed in Appendix A as approved for that data type."
Use examples liberally. After each major requirement, include a concrete scenario. "Example: You're writing a proposal response and want to use ChatGPT to improve the executive summary. Because the proposal contains no regulated data, this is permitted using ChatGPT Enterprise. You should document your use in the AI use log."
Structure the policy so people can find answers quickly. Use a Q&A format for common scenarios, include a decision tree or flowchart for the approval process, and provide quick-reference tables for approved vs. prohibited tools. The policy document might be 15 pages, but someone should be able to answer their specific question in under two minutes.
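That quick-reference table can even be expressed as data that an intranet page or help-desk script queries. The sketch below is illustrative, with tool names taken from the earlier examples; the categories and wording are assumptions, not your actual Appendix A.

```python
# Quick-reference table as data; tools and categories are illustrative.
APPROVED_TOOLS = {
    "ChatGPT Enterprise": {"de-identified", "public"},
    "Claude Pro": {"de-identified", "public"},
    "Microsoft Copilot": {"de-identified", "public"},
}

def is_permitted(tool: str, data_type: str) -> str:
    """Answer the question an employee actually has: can I use this tool with this data?"""
    if tool not in APPROVED_TOOLS:
        return "Not approved. Submit a request through the AI approval process."
    if data_type in APPROVED_TOOLS[tool]:
        return "Permitted. Document your use in the AI use log."
    return "Not permitted with this data type. Ask the Privacy Officer before proceeding."

print(is_permitted("ChatGPT Enterprise", "PHI"))  # -> not permitted with this data type
```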
Here's a clause structure I've seen work well for data restrictions:
Never input the following data into any AI system without written approval from the Privacy Officer: names, social security numbers, dates of birth, medical record numbers, diagnosis codes, treatment information, financial account numbers, or any other information that could identify an individual. If you're unsure whether data is regulated, ask before using any AI tool. De-identified data may be used with pre-approved tools listed in Appendix A.
That's clear, specific, and actionable. Compare it to "AI systems must not process sensitive data without appropriate safeguards." The second version sounds official but gives employees no guidance.
Address common questions directly in the policy. Can I use AI to draft emails? Can I use AI to summarize meetings? Can I use AI coding assistants? Can I use the AI features built into Microsoft 365? Answering these upfront prevents the endless stream of clarification requests that bogs down your approval process.
Connecting Policy to Your Broader AI Governance Framework
An AI use policy doesn't exist in isolation. It's one component of your broader AI governance framework, which also includes risk assessment processes, technical controls, vendor management, model validation, and ongoing monitoring. The policy sets the rules; the framework provides the structure to implement and maintain them.
Your policy should reference other governance documents without duplicating them. Point to your data classification policy for definitions of regulated data types. Point to your vendor management policy for procurement requirements. Point to your incident response plan for what to do when something goes wrong. This keeps the policy focused and maintainable.
The policy also needs clear ownership. Assign a specific role or person responsible for maintaining it, approving exceptions, and answering questions. In most organizations, this sits with the CISO or Chief Privacy Officer, but it might be a dedicated AI governance committee or a senior data leader. What matters is that everyone knows who owns it.
For organizations subject to the EU AI Act or emerging state-level AI regulations, your policy becomes part of your compliance documentation. You'll need to demonstrate that you have governance processes in place for high-risk AI systems, that you're conducting required impact assessments, and that you're maintaining appropriate records. A well-structured policy makes that documentation much easier.
The questions executives should be asking about AI deployment map directly to your policy structure. If leadership wants to know how you're managing AI risk, your policy should provide clear answers: here's what we allow, here's what we prohibit, here's how we enforce it, here's how we monitor compliance.
Making Policy Implementation Actually Happen
Writing the policy is the easy part. Implementation is where most organizations struggle because it requires changing behavior, not just documenting requirements. The policy rollout needs executive support, clear communication, accessible training, and visible enforcement.
Start with leadership. Your executives need to understand what the policy requires, why it matters, and what happens if it's not followed. They also need to model compliance. If the C-suite is using prohibited AI tools, no one else will take the policy seriously.
Communicate the policy in multiple formats. Send a company-wide announcement, hold team meetings, create quick-reference guides, post FAQs on your intranet, and make the full policy easily accessible. Assume people will need to encounter the information three or four times before it sticks.
Provide easy ways for employees to ask questions and request approvals. Set up a dedicated email address or intake form, staff it with someone who can respond within a business day, and track requests so you can identify common issues. If the approval process takes weeks, people will work around it.
Expect resistance, especially from teams that have been using AI tools without oversight. Some will argue that policies slow innovation or that the restrictions are too strict. The response is not to weaken the policy but to demonstrate that it's possible to use AI within the boundaries you've set. Find early adopters who can show how to use approved tools effectively and share their successes.
Plan for policy exceptions because reality is messy. You'll have legitimate use cases that don't fit the standard approval process, vendors that need custom arrangements, or business opportunities that require rapid decisions. Build an exception process into your policy, require specific approvals and documentation for exceptions, and review them regularly to see if the policy needs updating.
Finally, measure compliance. Track approval requests, monitor AI tool usage through technical controls, survey employees about their AI practices, and review incidents or violations. Use that data to refine your policy, improve training, and demonstrate to leadership that your governance framework is working.
The Strategic Value of a Working Policy
Organizations often treat an AI use policy as a compliance checkbox, but it's actually a competitive advantage. A clear, enforceable policy lets you adopt AI faster than competitors who are paralyzed by uncertainty. It gives customers and partners confidence that you're managing AI risk appropriately. It protects you from regulatory action and reputational damage. And it allows your employees to use powerful tools without constantly worrying about whether they're breaking rules.
The organizations that succeed with AI aren't the ones with the most permissive policies or the most restrictive ones. They're the ones with the clearest boundaries and the strongest execution. They've thought through what AI means for their risk profile, built policies that address real threats without creating unnecessary friction, and implemented governance processes that actually work.
In my experience, the difference between organizations that implement effective AI governance and those that don't comes down to leadership commitment. A CISO can write the perfect AI use policy, but if the CEO is using prohibited tools or the business units are allowed to bypass the approval process, the policy becomes meaningless. This needs to be a leadership-level priority with accountability at the executive level.
As regulations mature and enforcement actions begin, the organizations with working AI governance will be fine. The ones still treating this as a theoretical risk will be explaining to regulators, customers, and boards why they didn't take it seriously sooner. Your AI use policy is where that governance starts.