Why Carl for Your Audience on AI Governance
AI governance is the rare topic where every leadership team knows they need a position and almost no one is sure what that position should be. The regulatory picture is fragmenting faster than internal policy can keep up — sectoral guidance from HHS, evolving FTC enforcement signals, state-level AI laws, the EU AI Act influencing US-headquartered companies with European footprints, and a patchwork of disclosure requirements that vary by industry and jurisdiction. Most organizations have published an AI usage policy and called the work done. It isn't.
Carl B. Johnson sits at the intersection of AI, privacy, cybersecurity, and compliance — the four disciplines that effective AI governance must coordinate. As CISO at Cleared Systems, he advises organizations on AI governance frameworks that account for regulatory uncertainty rather than ignoring it, and his work on HIPAA-and-AI in healthcare has been at the leading edge of how AI compliance is practiced today.
For boards, executive offsites, technology conferences, and corporate leadership programs, Carl delivers AI governance content that's grounded in current practice rather than thought-leadership abstraction. The audience leaves with a working understanding of the regulatory trajectory, the specific governance gaps most organizations have, and the decision framework leadership can use to actually move forward.
Available Sessions on AI Governance
The Future of Compliance: AI, Privacy, Cybersecurity, and Regulation
Where the convergence of AI, privacy, and regulation is heading and how organizations can prepare today for what is coming. Covers the regulatory trajectory across jurisdictions, the practical governance patterns emerging from organizations that are getting this right, the specific AI risk categories that boards need to track, and the "no-regrets" governance moves leaders can make this quarter even amid regulatory uncertainty.
AI Governance for the Boardroom
A focused briefing built for board-level audiences. Translates the AI governance landscape into the questions directors should be asking management, the oversight patterns that distinguish boards exercising real governance, and the disclosure-and-accountability framework that limits liability exposure as the regulatory picture sharpens.
Building an AI Governance Framework That Holds Up
Hands-on session for executives and senior leaders responsible for actually building AI governance — chief privacy officers, chief compliance officers, CISOs, general counsel, and heads of risk. Walks through framework architecture, the cross-functional accountability model that works in practice, AI use-case classification and risk-tiering, vendor and third-party AI governance, and the documentation patterns that demonstrate program maturity to regulators.
Download the One-Sheet
Get a printable, shareable PDF of this topic — perfect for circulating to your event committee or program chair. Includes the same sessions, audience profile, and FAQs as this page in a 2-page format.
Who This Is For
Audiences responsible for setting AI governance direction — or the technology and risk audiences that need to translate that direction into actual practice.
- Corporate boards and audit committees
- Executive leadership offsites
- Technology and innovation conferences
- Privacy and compliance summits
- CISO and security leadership events
- Industry-specific AI conferences
- General counsel and legal-leadership programs
- Risk management association events
What Audiences Walk Away With
- A clear-eyed view of where AI regulation is actually heading across US and international jurisdictions
- The "no-regrets" governance moves any organization can make right now, regardless of how the regulatory picture sharpens
- A practical framework for tiering AI use cases by risk and regulatory exposure
- The specific governance gaps most organizations have but haven't yet identified
- The cross-functional accountability model that distinguishes effective AI governance from policy-on-paper
- Board-level vocabulary for discussing AI risk that translates to actual oversight