Your CTO walks into your office with a proposal to embed AI into customer support workflows. Your legal team is evaluating an AI-powered contract review tool. A vendor promises their AI will cut operational costs by 30%. These scenarios are playing out in boardrooms right now, and the executives I talk to are asking the same questions: how do we separate genuine capability from vendor hype, and how do we avoid becoming a cautionary tale?
The pattern I see across healthcare organizations, defense contractors, and federal systems integrators is consistent: organizations rush to deploy AI because competitors are doing it, because vendors make it sound easy, or because they're afraid of falling behind. What's missing is a structured approach to asking the right questions before you commit resources and expose your organization to new categories of risk.
These aren't theoretical executive AI questions. They come from actual vendor evaluations, post-incident reviews, and the gap between what AI vendors promise and what compliance frameworks require. If you're evaluating an AI proposal—whether it's a vendor product or an internal development effort—these ten questions will help you make better decisions.
What Problem Are We Actually Solving?
Start here, not with the technology. I've watched organizations deploy AI solutions in search of a problem to solve. They end up with expensive tools that don't integrate with existing workflows, don't deliver measurable value, and create compliance obligations they weren't prepared to manage.
Ask your team to articulate the problem in operational terms. "We need AI" isn't a problem statement. "Our customer support team can't respond to 40% of inquiries within our contractual SLA" is a problem statement. "Manual contract review delays procurement by an average of 12 days" is a problem statement. The specificity matters because it gives you a baseline to measure against and helps you evaluate whether AI is actually the right solution.
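To make the baseline concrete, here's a minimal sketch of the kind of measurement I'm describing. The ticket data and the four-hour SLA are hypothetical placeholders; the point is that you can put a number on the problem before anyone proposes a tool.

```python
from datetime import datetime, timedelta

# Hypothetical ticket records: (created, first_response) timestamps.
tickets = [
    (datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 10, 30)),
    (datetime(2024, 5, 1, 9, 15), datetime(2024, 5, 1, 16, 45)),
    (datetime(2024, 5, 1, 9, 30), datetime(2024, 5, 2, 8, 0)),
]

SLA = timedelta(hours=4)  # assumed contractual response-time SLA

within_sla = sum(1 for created, responded in tickets if responded - created <= SLA)
print(f"Baseline: {within_sla / len(tickets):.0%} of inquiries answered within SLA")
```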
In my experience, about half the time, the problem can be solved more effectively with process improvement, better training, or conventional automation. AI introduces complexity, ongoing costs, and regulatory risk. Make sure you need it before you take on those obligations.
What Data Will This System Process, and Do We Have Rights to Use It This Way?
This is where vendor pitches and regulatory reality collide. AI systems are trained on data, they process data during operation, and they often retain data for ongoing improvement. You need to know exactly what data the system will touch and whether your current agreements, consents, and authorizations cover that use.
The Healthcare Perspective
In healthcare, this gets complicated fast. If you're considering an AI tool that processes protected health information (PHI), you need a Business Associate Agreement with the vendor. But as I've written about in Do AI Vendors Need to Sign a BAA?, many AI vendors resist signing BAAs, or their technical architecture makes HIPAA compliance impossible. They may be using your data to train models, storing it in shared infrastructure, or sending it to third-party APIs. None of that is compatible with HIPAA's requirements.
Ask the vendor specifically: Will you sign a HIPAA BAA? Where will our data be processed and stored? Will any of our data be used for model training? Who else has access to systems that process our data? If you get vague answers or pushback, walk away.
The Defense Contractor Perspective
For defense contractors and federal systems integrators, the question is whether the AI system will process Controlled Unclassified Information (CUI) or technical data subject to export controls. If it will, you need to know whether the vendor's infrastructure meets NIST SP 800-171 requirements, whether their personnel have appropriate clearances, and whether data will remain within CONUS. Most commercial AI vendors cannot meet these requirements. Their platforms are designed for scale and efficiency, not for the compartmentalization and access controls that CUI demands.
I've seen contractors propose using ChatGPT or similar tools to draft technical documentation. The moment CUI goes into that system, you have a potential ITAR or CMMC violation. The vendor's terms of service probably give them broad rights to use your inputs, and you have no meaningful control over where that data goes or who accesses it.
How Does This System Make Decisions, and Can We Explain Them?
You don't need to understand transformer architectures or gradient descent to ask this question. What you need to know is whether the system's decision-making process can be explained in terms that satisfy your legal, compliance, and business requirements.
If the AI denies a claim, flags a transaction as fraudulent, or recommends hiring one candidate over another, can you explain why it made that decision? This isn't just about fairness or ethics—though those matter—it's about legal defensibility and operational control. Regulatory frameworks increasingly require explanations for automated decisions, especially in healthcare, finance, and employment contexts. The EU AI Act mandates transparency for high-risk systems. Even where regulation doesn't require it, your ability to audit and challenge the system's decisions is a basic operational necessity.
Ask the vendor: Can we audit the factors that led to a specific decision? Can we identify and correct errors in the model's logic? If the system makes a decision we disagree with, what recourse do we have? If the answer is "the model is proprietary" or "it's too complex to explain," you're buying a black box. That might be acceptable for low-stakes applications, but not for anything that affects legal rights, regulatory compliance, or business-critical decisions.
What Are the Failure Modes, and What Happens When It Gets It Wrong?
AI systems fail differently than traditional software. They don't just crash or throw error messages. They hallucinate, produce plausible-sounding but incorrect outputs, and exhibit bias that reflects patterns in training data. Your executive AI questions need to address what happens when the system fails and what safeguards exist to detect and mitigate those failures.
I ask vendors and internal teams to describe the failure modes they've observed or anticipate. What does the system do when it encounters data it wasn't trained on? How does it handle edge cases? What's the false positive and false negative rate, and what's the business impact of each? If the system is generating text, how do you detect when it's confabulating information?
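One way to force that conversation is a back-of-the-envelope expected-cost calculation. The rates and dollar figures in this sketch are hypothetical assumptions, not benchmarks; what matters is that the exercise translates error rates into a business impact you can weigh against the cost of the system.

```python
# Hypothetical figures -- replace with your own measured rates and costs.
decisions_per_month = 10_000
false_positive_rate = 0.03       # e.g., legitimate claims wrongly flagged
false_negative_rate = 0.01       # e.g., fraudulent claims missed
cost_per_false_positive = 150    # rework, customer friction, appeals
cost_per_false_negative = 2_500  # losses, fines, remediation

monthly_cost = decisions_per_month * (
    false_positive_rate * cost_per_false_positive
    + false_negative_rate * cost_per_false_negative
)
print(f"Expected monthly cost of model errors: ${monthly_cost:,.0f}")
```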
For any AI deployment, you need human review for high-stakes decisions. That review needs to be meaningful, not rubber-stamping. I've seen organizations deploy AI with a "human in the loop" that consists of a junior employee glancing at the AI's output for three seconds before approving it. That's not oversight, it's liability theater. The reviewer needs sufficient context, expertise, and time to actually evaluate the AI's output.
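As a sketch of what meaningful review could look like operationally, here's a hypothetical gate that refuses to finalize a high-stakes decision unless a qualified reviewer actually spent time on it. The role names and thresholds are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

MIN_REVIEW_SECONDS = 120  # assumed minimum for a substantive evaluation
QUALIFIED_ROLES = {"clinician", "contracts_attorney", "senior_analyst"}  # illustrative

@dataclass
class Review:
    reviewer_role: str
    seconds_spent: float
    approved: bool

def finalize(ai_recommendation: str, review: Review) -> str:
    """Accept the AI output only if the human review was substantive."""
    if review.reviewer_role not in QUALIFIED_ROLES:
        raise PermissionError("Reviewer lacks the expertise this decision requires")
    if review.seconds_spent < MIN_REVIEW_SECONDS:
        raise ValueError("Review too brief to count as oversight; escalate instead")
    return ai_recommendation if review.approved else "rejected -- route to manual process"
```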
Speaking on AI Governance and Risk Management
Carl delivers practical, CISO-level keynotes on AI governance, regulatory compliance, and managing emerging technology risk. If your leadership team is navigating AI deployment decisions, Carl brings real-world patterns and actionable frameworks—no vendor pitches, no buzzwords.
Book Carl to Speak
Who Is Liable When Things Go Wrong?
Read your vendor contract carefully. Most AI vendors include broad disclaimers of liability. They'll warrant that the software will perform "in substantial conformance" with documentation, but they disclaim warranties of fitness for a particular purpose, and they cap liability at the fees you've paid. If their AI makes a decision that results in a regulatory fine, a lawsuit, or a data breach, the vendor's exposure is typically limited to refunding your subscription fee. Your exposure is unlimited.
This isn't unique to AI, but the stakes are different. A CRM system that crashes costs you productivity. An AI system that exposes patient data or makes discriminatory decisions costs you fines, lawsuits, and reputation damage. You need to understand who bears that risk and whether your insurance covers AI-related incidents. Most cyber liability policies were written before generative AI became ubiquitous, and they may not cover the risks you're actually taking on.
Ask your legal team to review the vendor's indemnification provisions and liability caps. Ask your insurance broker whether your current policies cover AI-related claims. And ask yourself whether you're comfortable bearing the residual risk. If you're not, either negotiate better terms or don't deploy the system.
What Regulatory Obligations Does This Create or Modify?
AI deployment changes your regulatory posture. Depending on what the system does and what data it processes, you may trigger new obligations under HIPAA, GDPR, CCPA, the EU AI Act, or sector-specific regulations. You need to map those obligations before you deploy, not after.
For healthcare organizations, AI that processes PHI falls squarely within HIPAA's requirements for access controls, audit logs, breach notification, and more. If you're using AI to make treatment decisions or prior authorization determinations, you may trigger additional requirements under state laws and payer contracts.
For organizations in the defense industrial base, AI systems that process CUI must meet the same NIST SP 800-171 controls as any other system. That means multifactor authentication, encryption, incident response, and all the rest. If the AI vendor can't meet those controls—and most can't—you can't use their system for CUI. For more on the baseline requirements, see What Is Regulatory Compliance? A Practical Guide for Leaders.
Under the EU AI Act, certain AI systems are classified as high-risk and subject to specific requirements for risk management, data governance, transparency, and human oversight. If you're deploying AI in the EU or offering AI-enabled products to EU customers, you need to understand where your system falls in that risk taxonomy and what compliance obligations apply.
The pattern I see is that organizations evaluate AI tools from a feature perspective—does it do what we need?—without assessing the regulatory implications. That's backwards. Regulatory compliance is a hard constraint. If you can't meet the requirements, the features don't matter.
Do We Have the Internal Capability to Govern This System Over Time?
AI governance isn't a one-time decision, it's an ongoing operational discipline. Models drift. Training data becomes stale. Business requirements change. Regulatory frameworks evolve. You need people, processes, and tools to monitor the system's performance, assess its continued fitness for purpose, and make adjustments as needed.
Ask yourself: Do we have someone responsible for monitoring this system's outputs for accuracy, bias, and compliance? Do we have a process for retraining or updating the model when performance degrades? Do we have audit logs that let us trace decisions back to specific inputs and model versions? Do we have a process for responding to regulatory guidance or enforcement actions related to AI?
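On the audit-log question specifically, here's a minimal sketch of the kind of record that makes decisions traceable later. The field names are assumptions about what a governance team might capture, not a prescribed schema; raw inputs containing PHI or CUI would live in a controlled archive, with only a fingerprint in the log.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, inputs: dict, output: dict, reviewer: str | None) -> dict:
    """Capture enough context to trace a decision back to its inputs and model version."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which model and version made the call
        "input_hash": hashlib.sha256(     # fingerprint of inputs archived elsewhere
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,       # None means no human review occurred
    }

record = audit_record("claims-triage-v2.3", {"claim_id": "12345"}, {"decision": "flag"}, "jdoe")
print(json.dumps(record, indent=2))
```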
Most organizations don't have an AI governance function when they deploy their first AI system. That's fine, but you need to build one. I recommend starting with a cross-functional working group that includes legal, compliance, IT, and the business units using the AI. Give them a clear mandate: define acceptable use policies, establish risk assessment criteria, monitor deployments, and escalate issues that require executive attention. For a more detailed framework, see What Is AI Governance? A Framework for Organizations Deploying AI.
AI governance is not a technology problem; it's a leadership and accountability problem. You need named owners and clear processes, or the system will drift into risk you didn't intend to take.
What Is the Vendor's Track Record With Security and Compliance?
AI vendors are not all created equal. Some have mature security programs, transparent practices, and a track record of working with regulated industries. Others are startups that move fast, break things, and treat security as an afterthought. You need to know which you're dealing with.
Ask for the vendor's SOC 2 report. If they don't have one, that's a red flag for any system processing sensitive data. Ask about their incident response history: have they had breaches, and how did they handle them? Ask about their vulnerability management process and how they handle security patches. Ask whether they've worked with organizations in your industry and whether they understand your regulatory requirements.
For defense contractors, ask whether the vendor has experience with CMMC, NIST SP 800-171, or DFARS compliance. Most don't. Commercial SaaS vendors are used to operating in environments where "we encrypt data in transit and at rest" is sufficient. That doesn't meet the bar for CUI. If the vendor can't articulate how they meet specific NIST controls, they're not ready for your use case.
I've had vendors tell me "we're working toward CMMC compliance" or "we can set up a dedicated instance for your data." Those are not the same as "we are compliant." Don't accept promises; require evidence.
What's Our Exit Strategy?
Before you integrate an AI system into critical workflows, know how you'll extricate yourself if things go wrong. Can you export your data in a usable format? Can you move to a different vendor or bring the capability in-house? What's the business continuity plan if the vendor goes out of business, gets acquired, or discontinues the product?
This is particularly important for generative AI tools. If you've built business processes around a specific model or vendor, and that vendor changes pricing, restricts access, or shuts down, you need a fallback. I've seen organizations build entire workflows around tools that were later deprecated or became prohibitively expensive. They had no exit strategy, and they ended up locked in.
Ask for data portability guarantees in your contract. Ask for documentation of the system's APIs and integration points. Ask what happens to your data if you terminate the contract. And maintain in-house expertise so you're not entirely dependent on the vendor's support.
Bring Practical AI Risk Guidance to Your Leadership Team
Carl's keynotes on AI governance, regulatory risk, and emerging technology help executives ask better questions and make better decisions. See all keynote speaking topics or reach out about your event.
Book Carl for Your Event
Are We Prepared for the Scrutiny This Will Invite?
AI deployments attract attention. Regulators are focused on AI. Plaintiffs' attorneys are looking for test cases. The media is primed to cover AI failures. Your competitors and customers are watching. If you deploy AI in a customer-facing or high-stakes context, assume it will be scrutinized, and ask whether you're prepared to defend your decisions.
That scrutiny takes several forms. Regulatory inquiries: if you're in a regulated industry, expect that auditors and regulators will ask about your AI systems, how they work, and what safeguards you have in place. Litigation: if your AI makes a decision that harms someone, expect that decision to be challenged in court. Public relations: if your AI fails in a visible or embarrassing way, expect that you'll need to explain it publicly.
The best defense is documentation. Document your risk assessment. Document the business case for deploying the system. Document the safeguards you've put in place. Document the oversight and monitoring processes. If you can show that you took AI governance seriously, that you assessed the risks, and that you put reasonable controls in place, you're in a much stronger position than if you just deployed the system and hoped for the best.
The executive AI questions you ask before deployment become the evidence you rely on after something goes wrong. Take them seriously.
Putting It Into Practice
These questions aren't meant to kill AI initiatives. They're meant to surface risks and responsibilities before you commit resources and expose your organization to liabilities you didn't anticipate. The organizations that succeed with AI are the ones that treat it as a governance and risk management challenge, not just a technology opportunity.
In my work with healthcare organizations, defense contractors, and federal systems integrators, the pattern is clear: the organizations that ask hard questions up front avoid expensive mistakes later. They negotiate better vendor terms. They deploy systems that actually solve business problems. They build governance structures that let them scale AI responsibly. And they avoid becoming the cautionary tale that other executives reference when they're evaluating their own AI proposals.
You don't need to be an AI expert to ask these questions. You need to be a responsible executive who understands that deploying new technology creates new risks, and that managing those risks is a leadership responsibility, not a technical one. The vendors and internal teams who can't answer these questions aren't ready to deploy AI in your environment. The ones who can are the partners you want.
Your job as an executive isn't to understand the math behind large language models or the architecture of neural networks. Your job is to understand the business value, the risks, the regulatory obligations, and the governance structure. These ten questions give you a framework for that understanding. Use them.