Your vendor management program probably wasn't designed for AI. The questionnaires ask about data encryption and access controls, the contracts include standard indemnification clauses, and the risk scoring rubric treats a chatbot vendor the same way it treats your payroll provider. That worked fine when vendors delivered defined services with predictable behaviors. AI vendors don't fit that model.
I've reviewed dozens of AI vendor assessments over the past year, and the pattern I see is consistent: organizations apply their existing third-party risk frameworks to AI vendors and miss the actual risks. They verify SOC 2 compliance, confirm the vendor has a security team, and check boxes on a standardized questionnaire. Then they're surprised when the AI system hallucinates customer data in a support ticket, when model updates break a critical workflow, or when they can't explain to regulators how the vendor's algorithm made a decision about a patient.
AI vendor risk isn't just traditional vendor risk with a new technology label. The risk profile is fundamentally different, and your vendor management program needs to account for that difference.
Why AI Vendors Break Your Existing Risk Framework
Traditional vendor risk management assumes you can define what the vendor will do, how they'll do it, and what data they'll touch. You write requirements, the vendor meets them, and you audit compliance. The vendor's service is deterministic: given the same input, you get the same output.
AI systems don't work that way. The output isn't fully predictable. The vendor may not be able to explain why the system produced a particular result. The model changes over time through updates or continuous learning. The data the system was trained on—which fundamentally shapes its behavior—may be proprietary, purchased from third parties, or scraped from public sources the vendor can't fully document.
Your vendor questionnaire asks whether they encrypt data in transit and at rest. That's necessary but insufficient. The real question is what happens to your data once it's inside the model. Is it used for training? Can it influence outputs shown to other customers? Can the vendor delete it on request, or is it now permanently embedded in model weights? Most standard questionnaires don't ask these questions because they weren't written with AI in mind.
In healthcare and defense contracting, where I spend most of my time, this gap creates real compliance exposure. A HIPAA-covered entity needs to know not just that an AI vendor will sign a BAA, but whether their platform architecture can actually honor the agreement's requirements. A defense contractor subject to ITAR needs to understand whether the AI model was trained on data that includes foreign national contributions, and whether model updates could introduce unauthorized technical data. These aren't theoretical concerns—I've seen both create compliance problems that standard vendor assessments missed entirely.
The Questions Your AI Vendor Assessment Must Answer
Your existing vendor questionnaire needs a new section, and it needs to be mandatory for any vendor providing AI capabilities. These questions aren't exhaustive, but they address the gaps I see most often:
Model Training and Data Lineage
Start with the foundation: what data trained this model, and where did it come from? Ask the vendor to describe their training data sources. Public datasets are one thing; proprietary data purchased from aggregators is another. If they scraped web data, did they respect robots.txt files? Did they include data from jurisdictions with restrictions on AI training?
For regulated industries, this matters legally. The NIST AI Risk Management Framework emphasizes data governance, and the EU AI Act creates explicit requirements around training data documentation. But beyond compliance, you need this information for risk assessment. If you're a healthcare organization and the vendor's model was trained on publicly available medical literature that includes outdated or debunked treatments, that's a clinical risk.
Ask whether your data will be used for training. The answer should be explicit and documented in the contract. "We take privacy seriously" is not an answer. "Customer data is not used for model training unless the customer opts in" is an answer. Get it in writing.
Model Behavior and Outputs
How does the vendor test for bias, hallucinations, and harmful outputs? What's their process for identifying and mitigating these issues before releasing a model update? Have they conducted red teaming or adversarial testing?
This is where many vendors get vague. They'll mention "rigorous testing" without defining what that means. Push for specifics. What datasets do they use for bias testing? What demographic categories do they test for? What's their threshold for acceptable performance disparity between groups?
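To make "threshold for acceptable performance disparity" concrete, here is a minimal sketch that computes per-group accuracy on a labeled evaluation set and flags the gap between the best- and worst-performing groups. The field names and the five-percentage-point threshold are illustrative assumptions, not an industry standard, and real bias testing typically covers more metrics than raw accuracy.

```python
from collections import defaultdict

# Illustrative threshold: flag the results if the accuracy gap between any
# two demographic groups exceeds five percentage points.
MAX_ACCEPTABLE_GAP = 0.05

def group_accuracy_gap(records):
    """records: iterable of dicts with 'group', 'label', and 'prediction' keys."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["label"])
    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

# Hypothetical evaluation records, reproduced from or supplied by the vendor.
sample = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 0},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 0, "prediction": 0},
]
acc, gap = group_accuracy_gap(sample)
print(acc, gap, "FLAG" if gap > MAX_ACCEPTABLE_GAP else "OK")
```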
For hallucinations—instances where the AI confidently generates false information—ask about their measurement methodology. Do they test against ground truth datasets? What's their observed hallucination rate? What happens when a customer reports a hallucination?
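A deliberately simplified sketch of one measurement approach: run an agreed question set through the system and count answers that are not supported by the reference. Real methodologies use human review or claim-level verification rather than the exact-match check assumed here, and the data and function names are hypothetical.

```python
def hallucination_rate(responses, ground_truth, is_supported):
    """
    responses:    dict mapping question id -> model answer
    ground_truth: dict mapping question id -> reference answer
    is_supported: callable deciding whether an answer is consistent with the
                  reference (exact match here; real programs use human review
                  or claim-level verification)
    """
    unsupported = sum(
        1 for qid, answer in responses.items()
        if not is_supported(answer, ground_truth[qid])
    )
    return unsupported / len(responses)

# Hypothetical spot check against an agreed ground-truth set.
truth = {"q1": "400 mg", "q2": "2019"}
answers = {"q1": "400 mg", "q2": "2017"}
rate = hallucination_rate(answers, truth, lambda a, b: a.strip() == b.strip())
print(f"observed unsupported-answer rate: {rate:.0%}")
```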
If you're considering AI for decision-making that affects people—approving benefits, screening resumes, prioritizing support tickets—these questions become critical. The questions executives should ask before deploying AI apply equally when evaluating a vendor's system.
Explainability and Auditability
Can the vendor explain why their system produced a particular output? This isn't an academic question. Regulators in healthcare, financial services, and government contracting increasingly expect organizations to be able to explain automated decisions. If your vendor can't provide that explanation, you're carrying the risk.
Ask what tools or techniques they use for explainability. Do they rely on attention-based explanations, SHAP values, or other interpretability methods? Can they provide explanations at the individual output level, or only aggregate statistics about model behavior?
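As a point of reference for what "individual output level" means, here is a minimal sketch using the open-source shap library on a stand-in model and public dataset. It shows the per-feature contribution to a single prediction, which is the kind of artifact you can ask a vendor to demonstrate; it is not the vendor's actual tooling.

```python
import shap
import xgboost

# Stand-in model on a public dataset, purely to have something to explain.
X, y = shap.datasets.california()
model = xgboost.XGBRegressor().fit(X, y)

# Individual-level explanation: per-feature contribution to one prediction.
explainer = shap.Explainer(model)
shap_values = explainer(X[:100])
shap.plots.waterfall(shap_values[0])
```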
For auditability, ask whether they maintain logs of inputs, outputs, and model versions. Can they reconstruct what version of the model was running on a specific date? If you discover a problem, can they help you identify all affected outputs?
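On your side of the integration, even a lightweight append-only log makes that reconstruction possible. A minimal sketch follows, assuming a JSONL file; the field names are illustrative, and sensitive inputs would normally be hashed or redacted rather than stored verbatim.

```python
import json
import uuid
from datetime import datetime, timezone

def log_ai_interaction(log_path, model_version, prompt, output):
    """Append one auditable record per AI call: enough to later reconstruct
    which model version produced which output for which input, and when."""
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # as reported by the vendor for this call
        "prompt": prompt,                # hash or redact this field if the input is sensitive
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["request_id"]

log_ai_interaction("ai_audit.jsonl", "vendor-model-2025-06-01",
                   "Summarize ticket #4821", "Customer reports that ...")
```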
Model Updates and Versioning
How often does the vendor update their model? What's their process for testing updates? Do customers have the option to stay on a previous version while testing a new one?
I've seen organizations deploy an AI vendor, validate its outputs, and then have the vendor push a major model update that changes behavior significantly. The organization didn't know the update was coming and didn't have a chance to revalidate. That's a governance failure, but it's also a vendor management failure—the contract should have addressed this.
Ask about rollback procedures. If an update causes problems, how quickly can they revert to the previous version? Is rollback automatic if error rates spike, or does it require a customer request?
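One way to operationalize that language on your side is a simple post-update check against the pre-update error rate. The numbers and the tolerance below are placeholders to be replaced with whatever the contract actually specifies.

```python
def should_roll_back(recent_error_rate, baseline_error_rate, tolerance=0.02):
    """Illustrative trigger: invoke the rollback clause when the error rate
    observed after an update exceeds the pre-update baseline by more than
    the agreed tolerance."""
    return recent_error_rate > baseline_error_rate + tolerance

# Hypothetical numbers: 3% errors before the update, 9% after.
if should_roll_back(recent_error_rate=0.09, baseline_error_rate=0.03):
    print("Error rate exceeds agreed tolerance -- invoke the rollback clause")
```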
Third-Party Model and Data Providers
Many AI vendors don't build their own foundation models. They use OpenAI, Anthropic, Google, or others as the underlying engine and add their own layer on top. That's fine, but you need to know about it because it affects your risk profile.
Ask which third-party AI services they use. Ask whether your data passes through these services and under what terms. If they're using a major provider's API, understand that provider's data use policy—it may not match what your vendor is promising you.
The same applies to data providers. If the vendor enriches their service with third-party data, you need to know the source and terms. This is especially important for AI systems that generate content, where the training data provenance affects copyright risk.
Contract Terms That Actually Matter for AI
Standard vendor contracts weren't written with AI in mind. You need additional terms that address AI-specific risks. These should be non-negotiable for any AI vendor processing sensitive data or supporting critical functions.
Data Use and Training Restrictions
The contract must explicitly state whether your data can be used to train or improve the vendor's models. If the answer is no, the language needs to be absolute and specific. "Customer data will not be used for model training" is clear. "Customer data may be used to improve our services" is ambiguous—improving services might include training.
For regulated industries, this section should reference applicable legal requirements. If you're subject to HIPAA, the contract should state that no protected health information will be used for training, even in de-identified form, without authorization. If you're subject to export controls, the contract should address restrictions on where training occurs and who has access to training data.
Model Transparency and Documentation
Require the vendor to provide documentation about model architecture, training data categories, known limitations, and bias testing results. This doesn't mean they have to disclose proprietary algorithms, but they should be able to describe their system's capabilities and limitations in sufficient detail for you to assess fit and risk.
The contract should specify that this documentation will be updated when material changes occur—new model versions, significant training data additions, or discovered limitations. Define what constitutes a material change and require advance notice.
Performance Standards and Monitoring
Standard SLAs measure uptime and response time. For AI vendors, you need additional metrics: accuracy rates, false positive and false negative rates, bias metrics across relevant demographic categories, and hallucination rates.
The contract should specify baseline performance levels and require the vendor to maintain them. It should also require the vendor to notify you if performance degrades below the baseline, and give you the right to audit their measurements.
This is difficult because AI performance isn't constant—it varies based on inputs and use cases. That's why the baseline should be defined against a specific test dataset you both agree on, and measured regularly. If the vendor's system performs at 95% accuracy on that test set today, the contract should require them to maintain that level.
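A minimal sketch of that recurring measurement follows, assuming the agreed test set lives in a CSV of inputs and expected outputs; call_vendor_api is a placeholder stub for however you actually invoke the vendor's system.

```python
import csv

CONTRACT_BASELINE = 0.95  # the accuracy level written into the contract

def call_vendor_api(text):
    # Placeholder: replace with the actual call to the vendor's system.
    return text

def accuracy_on_agreed_test_set(path):
    """Re-run the contractually agreed test set and compare results against
    the expected outputs recorded when the baseline was established."""
    total = correct = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # columns: input, expected
            total += 1
            correct += int(call_vendor_api(row["input"]) == row["expected"])
    return correct / total

observed = accuracy_on_agreed_test_set("agreed_test_set.csv")
status = "OK" if observed >= CONTRACT_BASELINE else "BELOW BASELINE"
print(f"{observed:.1%} vs {CONTRACT_BASELINE:.0%} baseline: {status}")
```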
Change Management and Update Controls
Require advance notice of model updates, with enough detail to assess impact. The notice period should give you time to test the update in your environment before it goes to production. For critical systems, you should have the right to defer updates or roll back if problems occur.
The contract should specify what constitutes an emergency update that can bypass normal notice requirements—security vulnerabilities qualify, but the vendor shouldn't be able to label every update as an emergency to avoid governance.
Audit Rights and Incident Response
Your audit rights need to extend beyond traditional security controls. You should have the right to audit the vendor's AI testing processes, bias measurement methodologies, and data handling practices. For vendors supporting regulated industries, this should include the right to bring in third-party auditors.
Incident response provisions should cover AI-specific incidents: bias discovered in production, hallucinations causing harm, unauthorized training data use, or model behavior that violates your policies. The contract should define notification timelines and remediation requirements for these scenarios.
Liability and Indemnification
Standard indemnification clauses may not cover AI-related harms. If the vendor's AI system produces biased outputs that lead to a discrimination claim, or hallucinates information that causes financial harm, who bears the liability?
The contract should address this explicitly. At minimum, the vendor should indemnify you for claims arising from defects in their AI system, including bias, hallucinations, and other output errors, provided you used the system according to their documentation. The vendor will push back on this, arguing that AI outputs aren't guaranteed. That's fine—neither are the outputs of traditional software, but vendors still provide warranties and indemnities.
How AI Vendor Risk Differs by Industry
The baseline questions and contract terms apply across industries, but specific sectors face additional AI vendor risk concerns.
Healthcare
Healthcare organizations using AI vendors face HIPAA obligations that standard vendor management often misses. It's not enough for the vendor to sign a Business Associate Agreement—you need to verify they can actually comply with its terms.
Ask whether their model architecture allows them to delete or return PHI on request. Many AI systems can't selectively remove data from trained models. If your BAA requires the vendor to return or destroy PHI at contract termination, and their model was trained on that PHI, they may not be able to comply. This creates a compliance gap that signing a BAA alone doesn't solve.
For clinical decision support AI, ask about the evidence base. Was the model trained on diverse patient populations? Has it been tested for performance disparities across demographic groups? What clinical validation has been performed? These questions aren't just risk management—they're patient safety issues.
Defense and Government Contractors
Defense contractors evaluating AI vendors need to consider export control and supply chain security implications that commercial risk assessments don't address. If you're subject to ITAR or EAR, you need to understand where the vendor's model training occurred, who had access to training data, and whether model updates could introduce foreign national contributions.
For contractors pursuing CMMC certification, AI vendors fall under your supply chain security requirements. The vendor needs to meet the same security standards required of other suppliers handling CUI. But AI vendors present additional challenges: if they're providing a cloud-based AI service, their infrastructure security becomes your compliance responsibility. Supply chain security expectations from primes apply to AI vendors just as they apply to any other subcontractor.
Ask whether the vendor's employees or contractors with access to your data have been screened consistent with your security requirements. Ask where model training and inference occur geographically. Ask whether they can provide a system exclusively for your use, isolated from other customers, if your data sensitivity requires it.
Financial Services
Financial institutions face model risk management requirements that apply to AI just as they apply to traditional models. If you're using a vendor's AI for credit decisions, fraud detection, or trading, your regulators expect model validation, ongoing performance monitoring, and documentation of limitations.
Ask whether the vendor can provide the documentation your model risk management framework requires. Can they describe the model's theoretical basis? Can they provide performance statistics across different scenarios? Can they explain known limitations?
For consumer-facing AI, ask about adverse action notices and explainability. If the AI denies a credit application or flags a transaction as fraud, can you provide the consumer with specific reasons? The vendor's explainability capabilities directly affect your regulatory compliance.
Building an AI-Specific Vendor Assessment Process
Your vendor management program needs a dedicated AI track that runs parallel to your standard process. Not every AI vendor will require the full assessment—a vendor providing a simple content moderation API needs less scrutiny than one providing clinical decision support—but you need a structured way to determine the level of review required.
Start with categorization. Develop criteria that determine whether a vendor requires AI-specific assessment. The criteria should consider data sensitivity, decision criticality, and regulatory context. A vendor processing PHI with AI requires enhanced assessment. A vendor whose AI makes or influences decisions about people requires enhanced assessment. A vendor supporting compliance-critical functions requires enhanced assessment.
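A simple triage function can encode these criteria so the decision is consistent rather than ad hoc. The criteria, field names, and tier labels below are illustrative assumptions to adapt to your own risk appetite and regulatory context.

```python
def assessment_tier(vendor):
    """Illustrative triage: decide whether a vendor needs the AI-specific
    supplemental assessment."""
    enhanced_triggers = [
        vendor.get("processes_phi_or_cui", False),            # sensitive data
        vendor.get("affects_decisions_about_people", False),   # decision criticality
        vendor.get("supports_compliance_critical_function", False),
    ]
    if any(enhanced_triggers):
        return "enhanced"       # full AI-specific questionnaire and contract terms
    if vendor.get("uses_ai", False):
        return "standard-plus"  # baseline questionnaire plus AI data-use questions
    return "standard"

print(assessment_tier({"uses_ai": True, "processes_phi_or_cui": True}))  # enhanced
```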
For vendors requiring AI-specific assessment, create a supplemental questionnaire that addresses the topics covered earlier in this article: training data, model behavior, explainability, updates, and third-party dependencies. Make it a required step before contract approval.
Assign responsibility for AI vendor assessments to someone who understands both your industry's risk context and AI fundamentals. Your standard vendor management team may not have the expertise to evaluate AI-specific responses. This doesn't mean you need a dedicated AI risk team, but someone needs to develop the competency—possibly your information security team, your compliance team, or a cross-functional group.
Create decision criteria for acceptance. What level of explainability is sufficient for your use cases? What bias testing is adequate? What model update notice period is required? These decisions should be made before vendor evaluation, not during negotiation.
Ongoing AI Vendor Risk Management
The risk assessment doesn't end when the contract is signed. AI systems change in ways that traditional vendor services don't, and your monitoring needs to account for that.
Require regular performance reporting from AI vendors. The metrics should include accuracy, error rates, and any bias measurements relevant to your use case. The vendor should provide these reports quarterly at minimum, and they should be compared against the baseline established during vendor evaluation.
Monitor for model updates. Your vendor should notify you of updates per the contract terms, but don't rely solely on vendor notification. If the system's behavior changes—response times shift, output formats differ, error patterns change—investigate whether an update occurred. Test major updates before they reach production if your use case allows it.
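Two cheap signals are worth automating: the model version string reported by the vendor's API (where one is exposed) and a shift in observable behavior such as latency. A sketch with illustrative thresholds:

```python
import statistics

def flag_possible_update(reported_version, approved_version,
                         recent_latencies, baseline_latency, threshold=1.5):
    """Cheap signals that an unannounced model change may have occurred: the
    version string no longer matches the approved one, or median latency
    shifts well outside the range observed during validation."""
    flags = []
    if reported_version != approved_version:
        flags.append(f"version changed: {approved_version} -> {reported_version}")
    if statistics.median(recent_latencies) > baseline_latency * threshold:
        flags.append("median latency shifted markedly since validation")
    return flags

# Hypothetical observations from routine monitoring.
print(flag_possible_update("v2.4", "v2.3", [1.8, 2.1, 1.9], baseline_latency=0.9))
```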
Review incidents involving AI vendors differently than traditional vendor incidents. When an AI system produces an unexpected or problematic output, investigate the cause. Was it a model issue, a data quality issue, or a usage issue? Document the incident and the vendor's response. If similar incidents recur, that's a pattern that should trigger contract review or vendor replacement.
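Even a lightweight incident log supports that pattern detection. A sketch, with hypothetical fields and data:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AIVendorIncident:
    date: str
    vendor: str
    description: str
    root_cause: str        # e.g. "model", "data quality", "usage"
    vendor_response: str

def recurring_causes(incidents, threshold=3):
    """Surface root causes that keep recurring for the same vendor, which is
    the pattern that should trigger contract review or replacement."""
    counts = Counter((i.vendor, i.root_cause) for i in incidents)
    return {k: n for k, n in counts.items() if n >= threshold}

log = [
    AIVendorIncident("2025-01-10", "acme-ai", "fabricated citation", "model", "patched prompt filter"),
    AIVendorIncident("2025-02-02", "acme-ai", "fabricated policy number", "model", "pending"),
    AIVendorIncident("2025-03-15", "acme-ai", "fabricated case reference", "model", "pending"),
]
print(recurring_causes(log))  # {('acme-ai', 'model'): 3}
```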
Conduct periodic reassessments. Your initial AI vendor assessment captured a point in time. The vendor's capabilities, processes, and risks change. Schedule reassessments annually or when significant changes occur: new model versions, changes in leadership or ownership, new regulatory requirements, or changes in your use case that increase risk.
What This Means for Your Organization
AI vendor risk isn't a future concern—it's present in most organizations already. If you're using Microsoft 365 Copilot, Salesforce Einstein, or any of dozens of other enterprise platforms with AI features, you have AI vendors. If your organization is evaluating dedicated AI solutions for customer service, document processing, or analytics, you'll have more.
Your vendor management program wasn't built for this risk profile, and adapting it requires deliberate effort. You need new questions, new contract terms, and new monitoring processes. You need expertise your team may not have today. You need to convince leadership that this isn't just another compliance checkbox but a material business risk.
The organizations that address AI vendor risk proactively will have a competitive advantage. They'll be able to adopt AI capabilities faster because they've built the governance infrastructure to do it safely. They'll avoid the incidents that result from blind adoption—the bias scandals, the data breaches, the compliance violations that come from treating AI vendors like traditional vendors.
The organizations that ignore AI vendor risk will learn about it through incidents. Some of those incidents will be minor embarrassments. Some will be regulatory enforcement actions. Some will be material business impacts that could have been prevented with proper vendor management.
Your existing vendor management program got you this far. It won't get you through the AI transition without adaptation. The adaptation doesn't require starting from scratch, but it does require acknowledging that AI vendors present different risks and building the capability to assess and manage those risks. That work starts with understanding what makes AI vendor risk different, asking the right questions, negotiating appropriate contract terms, and monitoring AI vendor performance over time.
The framework exists—the AI governance principles that apply to your internal AI use also apply to vendors. The challenge is execution: building the process, developing the expertise, and creating organizational accountability for AI vendor risk. That's not a technical problem or a compliance problem. It's a leadership problem, and it needs executive attention.