
Your Insurance May Not Cover AI Agent Failures: The D&O Exclusion Crisis

Major carriers now file AI-specific exclusions in D&O policies. 88% of organizations deploy AI but only 25% have board-level governance. Here's what executives must do before their next renewal.

Jashan Singh, Founder, beeeowl | April 5, 2026 | 8 min read
TL;DR: Between January 2025 and January 2026, major carriers (AIG, W.R. Berkley, Great American) shifted from silent AI coverage to explicit D&O exclusions. W.R. Berkley's absolute exclusion eliminates coverage for any claim arising from AI use or deployment. 88% of organizations deploy AI, but only 25% have board-level governance. WTW warns that AI advancing faster than SOX-style governance frameworks creates direct D&O exposure. Documented governance frameworks, audit trails, and security hardening are now coverage prerequisites.

Is Your D&O Policy Silently Excluding AI Agent Failures?

It probably is. Between January 2025 and January 2026, major carriers including AIG, W.R. Berkley, and Great American shifted from treating AI as silently covered under general technology risk to filing explicit AI-specific exclusions in D&O, E&O, and fiduciary liability policies. If you haven’t reviewed your policy language since your last renewal, you may be operating AI agents with zero coverage for the losses they can create.


I’ve worked with over 60 executives deploying private AI infrastructure across the US and Canada. In the last three months, insurance has become the conversation that derails every timeline. Not security. Not compliance. Insurance — because executives are discovering mid-deployment that their existing coverage won’t protect them if something goes wrong.

WTW’s March 2026 analysis on Sarbanes-Oxley and AI governance made the situation explicit: AI is advancing faster than the governance frameworks designed to manage it, and that gap creates direct D&O exposure. The carriers read the same reports. They’re responding by excluding what they can’t quantify.

What Do the New AI Exclusions Actually Say?

The language varies by carrier, but W.R. Berkley’s absolute exclusion is the clearest example of where the market is heading. Their filing eliminates coverage for any claim “based upon, arising out of, or attributable to” the use, deployment, or development of artificial intelligence. That’s not a carve-out for edge cases. That’s a blanket removal of coverage for anything touching AI.

Swept AI’s analysis of the exclusion landscape documents three tiers that carriers are adopting in 2026:

Absolute exclusions remove all coverage for AI-related claims regardless of circumstances. W.R. Berkley’s approach falls here. No governance framework, no audit trail, no amount of security hardening will restore coverage under an absolute exclusion — you need a different carrier or a separate policy.

Conditional exclusions remove coverage unless the policyholder can demonstrate specific AI governance controls. This is where AIG and several Lloyd’s syndicates have landed. You’re covered if you can prove documented governance, audit trails, and human oversight. You’re excluded if you can’t.

Sub-limited coverage maintains AI coverage but caps it at a fraction of the overall policy limit. Great American’s approach caps AI-related claims at 10-15% of the aggregate limit. Your $10 million D&O policy might only cover $1 million in AI losses.

Each tier creates a different strategic challenge. But the common thread is clear: silent coverage is over. Your carrier has taken a position on AI risk. The only question is whether you know what that position is.

Why Are Carriers Excluding AI Now?

Carriers aren’t excluding AI out of speculation — they’re responding to actual loss data and a governance gap they can measure. Techne Insights’s 2026 analysis documented the core statistic driving underwriting decisions: 88% of organizations now deploy AI systems, but only 25% have board-level AI governance policies.

That’s the number that matters to your underwriter. It tells them that the vast majority of policyholders deploying AI have no formal framework for managing the risk those systems create. From an actuarial perspective, that’s an unquantifiable exposure — and carriers don’t cover what they can’t price.

The governance gap isn’t abstract. We covered the broader control problem in our piece on AI agent governance for executives, but the insurance angle makes it concrete. Your board approved an AI initiative. Your team deployed agents connected to email, CRM, and financial systems. But nobody documented what the agents are authorized to do, how decisions get escalated, or what happens when something goes wrong.

Insurance Business Magazine’s professional risks report for 2026 found that claims frequency for AI-related D&O actions increased 260% year-over-year. The median settlement for AI governance failures — cases where a board failed to establish adequate AI oversight — reached $4.2 million. Carriers are pricing that reality into your renewal.

How Does Sarbanes-Oxley Apply to Your AI Agents?

If your AI agent touches financial data, it falls within SOX scope. WTW’s March 2026 analysis makes the connection that most executives haven’t drawn yet: AI systems that process, analyze, or generate financial information are subject to the same internal control requirements as any other system within your financial reporting infrastructure.

This isn’t a stretch of the regulation. SOX Section 404 requires management to assess the effectiveness of internal controls over financial reporting. An AI agent that generates variance commentary, models cash flow scenarios, or drafts financial disclosures is participating in your financial reporting process. If that agent hallucinates a number, misclassifies a transaction, or applies the wrong methodology, it’s a control failure with SOX implications.

WTW’s specific warning: “The pace of AI adoption has outstripped the development of governance frameworks comparable to those established for financial reporting under Sarbanes-Oxley. This gap creates oversight and control deficiencies with direct implications for D&O liability.”

We’ve seen this firsthand. CFOs who deployed AI agents for variance commentary and cash flow modeling without updating their SOX control documentation are now scrambling to retrofit governance before their next audit cycle. The agent works beautifully — but the control framework doesn’t account for it, and the auditor is asking questions.

The connection between SOX and D&O insurance is direct. A SOX control deficiency related to AI creates a material weakness disclosure. A material weakness disclosure triggers D&O scrutiny from regulators and shareholders. And if your D&O policy now excludes AI-related claims, that scrutiny lands on you personally with no insurance backstop.

What Does a Governance Framework That Satisfies Insurers Look Like?

Insurers operating under conditional exclusions have published — or will publish at renewal — specific control requirements for restoring AI coverage. The requirements converge around five areas, and they map directly to what we build into every deployment.

Documented AI Governance Policy

Your board needs a written AI governance framework that defines approved use cases, prohibited actions, escalation procedures, and accountability chains. This isn’t a 50-page policy manual nobody reads. It’s a working document that answers: what AI systems do we operate, who authorized them, what can they do, and who’s responsible when they act?

According to WTW, organizations with documented board-level AI governance policies receive 15-30% more favorable D&O terms at renewal compared to those without. That’s not a compliance exercise — it’s a direct financial return.

Comprehensive Audit Trails

Every action your AI agent takes must be logged immutably — what it did, when, what data it accessed, what decision it made, and what the outcome was. The log must be stored separately from the agent’s runtime environment so it can’t be modified by the agent itself.

This is the control that makes everything else verifiable. Without an audit trail, your governance policy is aspirational. With one, it’s demonstrable. We covered the implementation specifics in our audit logging and monitoring guide.
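
To make the requirement concrete, here is a minimal sketch of an append-only, hash-chained audit record writer in Python. The paths, field names, and the log_agent_action helper are illustrative assumptions for this post, not OpenClaw's actual logging interface; the point is the shape: each record captures what happened, when, and against what, and chains to the previous record so tampering is detectable.

```python
import hashlib
import json
import time
from pathlib import Path

# Illustrative only: a minimal append-only audit log with hash chaining,
# written to a location outside the agent's runtime directory so the agent
# cannot modify its own history. Paths and field names are assumptions.
AUDIT_LOG = Path("/var/log/agent-audit/actions.jsonl")

def log_agent_action(agent_id: str, action: str, resource: str, outcome: str) -> None:
    """Append one audit record, chained to the previous record's hash."""
    AUDIT_LOG.parent.mkdir(parents=True, exist_ok=True)
    prev_hash = "0" * 64
    if AUDIT_LOG.exists():
        lines = AUDIT_LOG.read_bytes().splitlines()
        if lines:
            prev_hash = json.loads(lines[-1])["hash"]
    record = {
        "ts": time.time(),
        "agent_id": agent_id,
        "action": action,        # what the agent did
        "resource": resource,    # what data or system it touched
        "outcome": outcome,      # result or error
        "prev_hash": prev_hash,  # links this record to the one before it
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

log_agent_action("finance-agent-01", "draft_variance_commentary",
                 "netsuite:gl_export_q1", "draft_saved_for_review")
```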

Human-in-the-Loop Controls

Certain categories of agent action require human approval before execution. At minimum: external communications, financial transactions above a defined threshold, data exports, and any action affecting regulated processes. The approval mechanism must create a record — a Slack message that nobody acknowledged doesn’t count as human oversight.

Insurers are specific about this. “Meaningful human oversight” means the agent pauses, presents the proposed action with context, and waits for explicit approval. Notification-only models don’t satisfy the control requirement. We built approval gates into OpenClaw for exactly this reason.
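
Here is a rough sketch of what an approval gate can look like in code. The category names, the dollar threshold, and the console prompt used as the approval channel are assumptions for illustration; in production the request would go to email, a ticket, or a chat workflow, and the decision would be written to the same audit log sketched above. The shape is what matters: the agent blocks, a named human decides, and a record is created.

```python
from dataclasses import dataclass

# Illustrative approval gate. Category names, the threshold, and the console
# prompt are assumptions for this sketch, not a specific product's API.

APPROVAL_REQUIRED = {"external_communication", "data_export", "regulated_process"}
FINANCIAL_THRESHOLD = 10_000  # dollars; assumed cutoff for payment sign-off

@dataclass
class ProposedAction:
    category: str
    description: str
    amount: float = 0.0

def needs_human_approval(action: ProposedAction) -> bool:
    if action.category in APPROVAL_REQUIRED:
        return True
    return action.category == "financial_transaction" and action.amount > FINANCIAL_THRESHOLD

def execute_with_gate(action: ProposedAction, approver: str) -> str:
    if needs_human_approval(action):
        # The agent pauses, presents the proposed action with context, and
        # waits for an explicit decision; a notification that nobody
        # acknowledged does not satisfy the control. Record the decision in
        # the audit log (see the log_agent_action sketch above).
        decision = input(f"[{approver}] Approve '{action.description}'? (yes/no): ").strip().lower()
        if decision != "yes":
            return "rejected"
    # ... perform the approved action here ...
    return "executed"

result = execute_with_gate(
    ProposedAction("financial_transaction", "Pay vendor invoice #4112", 25_000.0),
    approver="cfo@example.com",
)
```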

Security Hardening

Container isolation, credential separation, and network segmentation aren’t just security best practices anymore — they’re insurance prerequisites. An agent running with host-level access and hardcoded API keys represents an uncontrollable risk surface that no underwriter will accept.

Docker sandboxing ensures the agent physically can’t access systems outside its defined scope. Composio OAuth separation means the agent never holds raw credentials — it operates within pre-authorized scopes that can be revoked instantly. Firewall rules limit network access to explicitly approved endpoints. We covered the full architecture in our security hardening checklist.
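
For reference, the isolation controls translate into a short set of container flags. This is a generic sketch of a hardened docker run invocation, driven from Python; the image name, network name, volume, and env file are placeholders rather than beeeowl's actual deployment script, but every flag maps to a control an underwriter can verify.

```python
import subprocess

# Illustrative hardened container launch. Names and paths are placeholders.
cmd = [
    "docker", "run", "--detach",
    "--name", "ai-agent",
    "--read-only",                          # immutable root filesystem
    "--cap-drop", "ALL",                    # drop all Linux capabilities
    "--security-opt", "no-new-privileges",  # block privilege escalation
    "--pids-limit", "256",
    "--memory", "2g",
    "--network", "agent-egress",            # user-defined network; firewall rules
                                            # allow only approved endpoints
    "--volume", "agent-workspace:/workspace",  # scoped writable volume only
    "--env-file", "/etc/agent/runtime.env",    # short-lived OAuth token injected
                                               # at runtime, never raw API keys
    "agent-image:latest",
]
subprocess.run(cmd, check=True)
```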

Regular Risk Assessments

Insurers want evidence that you’re actively monitoring AI risk, not just deploying controls once and forgetting them. Quarterly risk assessments that evaluate agent behavior, review audit logs for anomalies, and update governance policies based on operational experience demonstrate the ongoing diligence carriers require.
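
One concrete, repeatable piece of that quarterly review is scanning the audit log for actions outside the agent's approved scope. A minimal sketch, assuming the JSONL log format from the audit-trail example above and an illustrative approved-action list:

```python
import json
from pathlib import Path

# Illustrative quarterly review step. The path and approved-action names are
# assumptions carried over from the audit-trail sketch above.
APPROVED_ACTIONS = {"draft_documents", "read_reports", "draft_variance_commentary"}

def review_audit_log(path: Path) -> list:
    """Return every audit record whose action falls outside the approved scope."""
    anomalies = []
    for line in path.read_text().splitlines():
        record = json.loads(line)
        if record.get("action") not in APPROVED_ACTIONS:
            anomalies.append(record)
    return anomalies

flagged = review_audit_log(Path("/var/log/agent-audit/actions.jsonl"))
print(f"{len(flagged)} actions outside approved scope this quarter")
```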

What Should You Do Before Your Next Renewal?

Your renewal is the deadline. Everything you do between now and then determines whether you get coverage, exclusions, or somewhere in between. Here’s the practical sequence.

Pull your current policy language. Call your broker this week and ask for the specific AI-related provisions in your D&O, E&O, and cyber liability policies. Get the actual exclusion language, not a summary. If your broker can’t produce it within 48 hours, that tells you something about how closely they’re tracking this market.

Inventory your AI deployments. Document every AI agent and system operating in your organization — what it does, what data it accesses, what actions it can take, and what governance controls exist. If you can’t produce this inventory, you don’t have governance. We covered what a proper inventory looks like in our post on who owns AI strategy in the C-suite.
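
If it helps to picture what an inventory entry captures, here is a minimal sketch; the field names are assumptions rather than a standard schema, but each one maps to a question your underwriter will ask.

```python
from dataclasses import dataclass, field, asdict
import json

# Illustrative schema for one AI inventory entry. Field names are assumptions.
@dataclass
class AgentInventoryEntry:
    name: str
    purpose: str                                        # what it does
    data_access: list = field(default_factory=list)     # systems and data it can read
    permitted_actions: list = field(default_factory=list)
    governance_controls: list = field(default_factory=list)
    owner: str = ""                                      # accountable executive

entry = AgentInventoryEntry(
    name="finance-agent-01",
    purpose="Drafts monthly variance commentary for CFO review",
    data_access=["netsuite:gl_export", "sharepoint:board_packets"],
    permitted_actions=["draft_documents", "read_reports"],
    governance_controls=["audit_logging", "approval_gate", "container_isolation"],
    owner="CFO",
)
print(json.dumps(asdict(entry), indent=2))
```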

Build the governance framework insurers want to see. Documented policy, audit trails, human-in-the-loop controls, security hardening, and regular assessments. This isn’t a six-month project. For a single-agent deployment with the right infrastructure, it’s a one-day job — because the controls are built into the deployment itself.

Present the framework at renewal. Walk your underwriter through the specific controls you’ve implemented. Show them the audit logs. Demonstrate the approval gates. Provide the security hardening documentation. The difference between an exclusion and a conditional reinstatement often comes down to whether you can show verifiable controls.

The carriers have made their position clear. Silent coverage is gone. The organizations that maintain favorable terms are the ones that can demonstrate governance — not promise it.

How Does Private Infrastructure Change the Insurance Conversation?

Private deployment creates a fundamentally different conversation with your underwriter because it gives you provable control over every variable they evaluate. When your AI agent runs on infrastructure you own — whether that’s a Mac Mini on your desk or a hosted VPS you control — you can produce the evidence chain insurers require without depending on a third party.

The audit trail lives on your hardware. The access logs are yours. The container isolation, credential separation, and firewall configuration are verifiable by your team, your auditor, and your underwriter. You’re not presenting a vendor’s compliance certificate. You’re presenting controls you own and operate.

Every beeeowl deployment — from the $2,000 hosted setup to the $6,000 MacBook Air package — includes the controls that carriers now require as prerequisites for AI coverage: Docker container isolation, Composio OAuth credential separation, approval gates for high-stakes actions, comprehensive audit logging, and documented security hardening. We don’t build these as add-ons because they aren’t optional anymore. They’re the difference between coverage and exclusion.

The insurance market has decided that ungoverned AI is an uninsurable risk. The governance frameworks and infrastructure controls that satisfy underwriters are the same ones that protect your organization. There’s no conflict between doing this right and doing it affordably.

Request your deployment and we’ll have you running with full governance controls — the ones your insurer now requires — in one day.

