American commercial insurance groups are beginning to put up barriers around artificial intelligence risk, drawing up language that excludes AI-related liabilities from standard cover in the hope that state regulators will approve it, according to reporting by the Financial Times. That is a stunning admission from an industry built to price uncertainty: today’s AI, in its view, is uninsurable at scale.
Why Underwriters Are Tapping the Brakes on AI Risk
AI poses two problems that insurers abhor: opacity and correlation. Opacity, because large models can fail in inexplicable ways and leave a scant evidentiary trail. Correlation, because the same model or vendor stack is often rolled out across thousands of companies, creating the potential for collective failure when something goes wrong.
One Aon executive put it starkly: carriers can absorb a single $1 billion payout; they cannot handle 10,000 simultaneous $100 million payouts triggered by the same model update, jailbreak, or agentic meltdown.
That is textbook systemic risk — the kind that transforms a niche mishap into a marketwide event.
The Losses Are Real Now in Real-World AI Disputes
There is also a growing docket of AI-related disputes and fraud.
- A solar company sued Google for $110 million over an AI Overview that allegedly defamed its business.
- Air Canada was ordered to honor a discount its chatbot invented.
- Fraudsters cloned the voice and appearance of a senior executive on a video call to trick an employee at the engineering firm Arup into approving a $25 million transfer.
Nothing in those cases required exotic science — just off-the-shelf tools combined with weak controls.
Insurers see exposure stretching from defamation to product liability, copyright infringement, privacy violations, employment discrimination, securities disclosure, and run-of-the-mill cybercrime made more acute by deepfakes. Even if relatively few incidents become public, the signal to underwriters is clear, and it is not good news for policyholders.
From Silent Cyber to Silent AI in Insurance Policies
A decade ago, carriers discovered “silent cyber”: exposure buried in traditional policies that had never been priced for cyber risk. The solution was exclusions, sub-limits, and purpose-built coverage. AI is tracking the same arc. If regulators approve the new exclusions, businesses should expect tighter language across general liability, professional liability (E&O), directors and officers (D&O), and cyber forms.
That might include AI-specific limits of liability, coinsurance on model-related events, required controls (human-in-the-loop review, logging, and kill switches), and narrower definitions of what counts as a covered “occurrence.” Reinsurers, whose capacity underwrites the market, will demand clarity; Lloyd’s of London did just that for cyber in 2023, requiring war exclusions to cap tail risk.
Regulators and Standards Are Outlining the Next Step
Changes in policy wording must be approved by state insurance regulators, typically coordinated through the National Association of Insurance Commissioners. Expect pointed questions about consumer protection and clarity: does an exclusion cover only vendor models, or also in-house systems and basic automation? How will carriers distinguish an AI-generated loss from a simple human error?
Governance is the road to insurability. Companies need repeatable controls that map to frameworks like the NIST AI Risk Management Framework or ISO/IEC 42001, the management system standard for AI. Model provenance documents, evaluations, red-teaming results, and audit logs, all artifacts a compliance team already produces, double as underwriting evidence.
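To make the idea concrete, here is a minimal sketch of what capturing those governance artifacts as a structured record might look like. All field names and the summary format are invented for illustration; they are not drawn from the NIST AI RMF or ISO/IEC 42001 themselves.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GovernanceEvidence:
    """Hypothetical record of the artifacts an underwriter might ask to see."""
    model_id: str                    # internal identifier for the model
    provenance: str                  # vendor, version, training-data lineage
    last_evaluation: datetime        # most recent red-team / evaluation run
    eval_findings: list[str] = field(default_factory=list)
    human_in_the_loop: bool = False  # is risky output reviewed before use?
    kill_switch: bool = False        # can the model be disabled quickly?

    def underwriting_summary(self) -> dict:
        # Flatten the record into the evidence an insurer might request.
        age_days = (datetime.now(timezone.utc) - self.last_evaluation).days
        return {
            "model": self.model_id,
            "provenance": self.provenance,
            "days_since_eval": age_days,
            "open_findings": len(self.eval_findings),
            "controls": {
                "human_in_the_loop": self.human_in_the_loop,
                "kill_switch": self.kill_switch,
            },
        }
```

The point of the structure is that the same record serves two audiences: the compliance team that maintains it and the underwriter who prices against it.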
What Coverage Might Look Like for AI-Related Risks
In the short term, buyers may encounter carve-outs for model hallucinations, autonomous behaviors, or misappropriation of proprietary training data; sub-limits for deepfake-enabled social engineering; and warranties requiring human oversight. Some carriers are also experimenting with parametric covers that pay out when a defined trigger occurs, such as a vendor outage or a model rollback, sidestepping thorny causation debates.
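The appeal of a parametric cover is that the payout depends only on a measurable trigger, never on proving causation for each downstream loss. A minimal sketch, with wholly illustrative thresholds and payout amounts:

```python
def parametric_payout(outage_minutes: int, model_rolled_back: bool,
                      outage_threshold: int = 240,
                      payout_per_event: float = 1_000_000.0) -> float:
    """Payout owed for one covered period under a hypothetical parametric cover.

    The trigger is objective: either the vendor was down past a time
    threshold, or the provider rolled back a bad model update. No
    causation analysis of individual losses is needed.
    """
    events = 0
    if outage_minutes >= outage_threshold:   # e.g. vendor down 4+ hours
        events += 1
    if model_rolled_back:                    # provider reverted an update
        events += 1
    return events * payout_per_event
```

Because the trigger is externally verifiable, claims can settle in days rather than after years of litigation over what the model "caused."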
Longer term, the market may move toward pooled or government-supported capacity for AI catastrophe scenarios, along the lines of terrorism and flood programs. Swiss Re and other reinsurers have warned for years that systemic events like cyber require shared approaches; AI raises the same coordination problem with a different failure mode.
How Companies Can Remain Insurable Amid AI Adoption
Begin with an inventory: where models are used, who supplies them, what data they touch, and where automated actions are permitted. Map failure modes to business impact. Turn that into policy: role-based access, human sign-off on high-risk actions, real-time monitoring and response logging, and a clear rollback plan for bad model updates. Negotiate procurement contracts with indemnities, service-level guarantees, and the right to audit third-party models.
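The inventory step above can be sketched in a few lines: list every deployment with its supplier, the data it touches, and whether it can act autonomously, then surface the ones that need human sign-off first. All model names and the flagging rule here are invented for illustration.

```python
# Hypothetical AI deployment inventory; every entry is illustrative.
deployments = [
    {"model": "support-chatbot", "vendor": "external",
     "data": ["customer PII"], "autonomous": True},
    {"model": "code-assistant", "vendor": "external",
     "data": ["source code"], "autonomous": False},
    {"model": "fraud-scorer", "vendor": "in-house",
     "data": ["transactions"], "autonomous": True},
]

def needs_human_signoff(d: dict) -> bool:
    # Flag anything that acts on its own or touches sensitive data.
    return d["autonomous"] or "customer PII" in d["data"]

high_risk = [d["model"] for d in deployments if needs_human_signoff(d)]
print(high_risk)  # ['support-chatbot', 'fraud-scorer']
```

Even a spreadsheet-level inventory like this gives an underwriter something concrete to price, which is the whole point.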
Then bring your broker in early. Underwriters will ask to see red-team reports, bias and safety testing, content-moderation filters, and incident-response drills that anticipate AI misuse. Demonstrable controls won’t erase exclusions, but they can help preserve broader terms and pricing as the market hardens.
Insurers are being perfectly clear: AI may be groundbreaking, but they won’t bankroll its worst-case scenarios until governance catches up. Whether that stance pushes the industry toward safer deployments, or simply shifts more risk onto buyers, will be decided in regulatory hearing rooms, not on the showroom floor.