AI in Insurance: What Underwriters Need to Know About Compliance
Insurance was one of the first industries to use predictive models at scale. Credit scoring for auto insurance, mortality models for life insurance, catastrophe modeling for property coverage — actuarial science has always been about using data to price risk. AI takes that further, faster, and with less transparency.
That's exactly what regulators are worried about. When an algorithm denies someone homeowner's insurance or prices them out of health coverage, the consequences are immediate and personal. And unlike a traditional actuarial model that a regulator can audit in a spreadsheet, a machine learning model that ingests hundreds of variables is a different kind of challenge entirely.
The regulatory response is accelerating. If you're an insurer using AI — or a vendor selling AI tools to insurers — you need to understand what's already in effect and what's coming.
The NAIC Model Bulletin
The National Association of Insurance Commissioners issued its Model Bulletin on the Use of Artificial Intelligence Systems by Insurers in December 2023. It's not a law — it's guidance that state insurance commissioners can adopt. But it sets the baseline expectation for the industry.
The bulletin's core message: insurers are responsible for the outputs of their AI systems, including systems built or operated by third-party vendors. You cannot outsource accountability. If a vendor's model produces unfairly discriminatory outcomes, the insurer deploying it bears the regulatory risk.
Key requirements in the bulletin include maintaining an AI governance framework, conducting ongoing testing for unfair discrimination, documenting how AI systems are used in insurance decisions, and ensuring human oversight of AI-driven outcomes. The bulletin applies to all lines of insurance and covers the full lifecycle from underwriting to claims to marketing.
Multiple states have already adopted or are in the process of adopting the bulletin. Even in states that haven't formally adopted it, commissioners are using it as a reference point for examinations and enforcement.
Colorado's Insurance AI Provisions
Colorado has gone further than any other state. Its insurance-specific statute, SB 21-169 (2021), restricts insurers' use of external consumer data sources and algorithms, and the broader Colorado AI Act (SB 24-205) layers general AI governance requirements on top, treating insurance decisions as consequential decisions.
Together, these laws prohibit insurers from using AI in ways that result in unfair discrimination based on protected characteristics — race, color, national origin, sex, sexual orientation, disability, and others. Critically, they target not just intentional discrimination but disparate impact: if your model produces discriminatory outcomes, you're on the hook even if no one designed it that way.
Insurers must conduct impact assessments for AI systems used in consequential decisions. For insurance, that means underwriting, pricing, claims processing, and fraud detection. The assessment must evaluate whether the system produces outcomes that disproportionately affect protected classes.
Colorado also requires insurers to provide notice to consumers when AI is used in decisions that affect them and to offer a mechanism to appeal adverse decisions. This is operationally significant — it means you need to be able to explain, at least at a high level, why the AI system reached the decision it did.
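The notice-and-appeal requirement implies reason-level explainability. Below is a minimal sketch of one way to generate consumer-facing reason codes from a linear underwriting model; the feature names, message text, and the scikit-learn model are all hypothetical, and a nonlinear model would need an attribution tool such as SHAP instead.

```python
# A minimal sketch of reason codes for an adverse decision, assuming a
# scikit-learn logistic regression trained on standardized features where
# the positive class means "approve". All names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

REASON_TEXT = {  # hypothetical mapping from feature to consumer-facing language
    "claims_last_3y": "Number of claims filed in the past three years",
    "roof_age_years": "Age of the roof on the insured property",
    "coverage_gap_days": "Days without continuous prior coverage",
}

def reason_codes(model: LogisticRegression, feature_names, x, top_n=3):
    """Return the features that pushed this applicant most toward denial."""
    # For a linear model, each feature's contribution to the log-odds is
    # coefficient * value; the most negative contributions argue for denial.
    contributions = model.coef_[0] * x
    worst = np.argsort(contributions)[:top_n]
    return [REASON_TEXT.get(feature_names[i], feature_names[i]) for i in worst]
```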
The Proxy Variable Problem
The central compliance challenge in insurance AI is proxy discrimination. Even if you remove protected characteristics like race and gender from your model's inputs, the model may reconstruct those categories through proxy variables. ZIP code correlates with race. Credit history correlates with income, which correlates with race and gender. Vehicle type, shopping behavior, web browsing patterns — all of these can serve as proxies.
Traditional actuarial models used relatively few variables, making proxy effects easier to identify. Modern machine learning models can ingest thousands of features and find correlations that human actuaries would never spot — including correlations that effectively encode protected characteristics.
This is not a theoretical concern. In 2023, the Consumer Financial Protection Bureau highlighted that AI-driven pricing models in lending were producing racially disparate outcomes through proxy effects. The same dynamics apply in insurance. A model that uses browsing behavior to price auto insurance may be building a proxy for age, income, and neighborhood — all of which correlate with race.
Regulators are increasingly sophisticated about this. "We removed race from the inputs" is not a sufficient answer. You need to demonstrate that your model's outputs don't disproportionately burden protected groups, not just that its inputs don't include protected characteristics directly.
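One practical starting point is a proxy scan: hold demographic data out of the model entirely and check which candidate input features correlate with it. The sketch below assumes pandas and hypothetical column names; a strong correlation doesn't prove proxy discrimination, but it tells you where to look.

```python
# A minimal sketch of a proxy scan, assuming demographic data is held
# separately from model inputs for testing. Column names are hypothetical.
import pandas as pd

def proxy_scan(features: pd.DataFrame, protected: pd.Series, threshold=0.3):
    """Flag input features that correlate strongly with a protected attribute.

    A high correlation does not prove proxy discrimination, but it marks
    features whose effect on model outputs deserves closer review.
    """
    # One-hot encode the protected attribute so correlation works for
    # categorical classes like race or gender.
    groups = pd.get_dummies(protected)
    flagged = {}
    for col in features.select_dtypes("number").columns:
        for group in groups.columns:
            r = features[col].corr(groups[group].astype(float))
            if abs(r) >= threshold:
                flagged.setdefault(col, []).append((group, round(r, 2)))
    return flagged  # e.g. {"zip_median_income": [("group_a", -0.41)]}
```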
Claims Processing and Fraud Detection
Underwriting and pricing get the most regulatory attention, but AI in claims processing raises its own compliance issues.
AI-powered claims triage — systems that automatically route, prioritize, or make initial coverage determinations — must comply with the same fairness requirements as underwriting models. If your claims AI systematically undervalues claims from certain geographic areas or demographic groups, that's an unfair claims settlement practice under existing insurance law, even before AI-specific regulations enter the picture.
Fraud detection is especially sensitive. AI fraud models that disproportionately flag claims from particular communities create both legal risk and reputational harm. A model trained on historical fraud data will learn whatever patterns exist in that data — including patterns that reflect biased investigation practices rather than actual fraud prevalence.
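A basic monitoring step, sketched below under the assumption that model flags can be joined to demographic test data (column names are hypothetical), is to compare each group's flag rate against the overall rate and investigate large ratios.

```python
# A minimal sketch of monitoring fraud-flag rates by group, assuming flags
# can be joined to demographic test data. Column names are hypothetical.
import pandas as pd

def flag_rate_report(df: pd.DataFrame, flag_col="fraud_flag", group_col="group"):
    """Compare each group's fraud-flag rate to the overall rate.

    Ratios well above 1.0 suggest the model may be reproducing biased
    investigation patterns from its training data rather than actual fraud.
    """
    overall = df[flag_col].mean()
    by_group = df.groupby(group_col)[flag_col].mean()
    return pd.DataFrame({
        "flag_rate": by_group,
        "ratio_to_overall": by_group / overall,
    }).sort_values("ratio_to_overall", ascending=False)
```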
Several state attorneys general have opened investigations into AI-driven claims denials, particularly in health insurance. The pattern is consistent: an AI system automatically denies or delays claims, the insurer fails to provide adequate human review, and policyholders suffer harm. These investigations are producing enforcement actions and consent orders that set expectations for the entire industry.
What Insurers Should Audit Now
If you're an insurer using AI in any part of your operations, here's where to focus your compliance efforts today.
- Inventory every AI system. Include vendor tools, embedded features in your policy administration system, and any models your actuarial team built in-house. You can't govern what you don't know about.
- Test for disparate impact across protected classes. Run your models against demographic data to identify whether outcomes differ significantly by race, gender, age, or other protected characteristics (see the sketch after this list). Don't wait for a regulator to do this analysis for you.
- Review your vendor contracts. Under the NAIC model bulletin, you're responsible for vendor AI outputs. Ensure your contracts include audit rights, transparency requirements, and bias testing obligations.
- Document your governance framework. Have a written AI governance policy that covers model development, validation, deployment, monitoring, and retirement. Regulators will ask for this during examinations.
- Build explanation capabilities. If a consumer asks why they received a particular rate or why a claim was denied, you need to be able to provide a meaningful answer. "The algorithm decided" is not sufficient.
- Establish human review processes for AI-driven decisions that adversely affect policyholders. Automated denials and adverse underwriting decisions should have a human in the loop before they become final.
- Monitor state legislative developments. At least fifteen states have AI-related insurance bills pending. The regulatory landscape will look different in twelve months than it does today.
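As promised in the disparate-impact item above, here is a minimal sketch of that test using the "four-fifths" adverse impact ratio, a common screening heuristic rather than a legal safe harbor. Column names are hypothetical.

```python
# A minimal sketch of a disparate-impact screen over model decisions,
# assuming decisions are joined to demographic test data.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, outcome="approved", group="race"):
    """Ratio of each group's favorable-outcome rate to the best group's rate.

    Values below roughly 0.8 are a conventional signal that outcomes differ
    enough to warrant investigating the model and its inputs.
    """
    rates = df.groupby(group)[outcome].mean()
    return (rates / rates.max()).sort_values()

# Usage: run underwriting decisions with demographic test data attached.
# results = adverse_impact_ratio(decisions_df, outcome="approved", group="race")
# print(results[results < 0.8])  # groups below the four-fifths threshold
```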
The EU Dimension
For insurers operating in Europe, the EU AI Act adds another layer. Annex III, point 5 designates as high-risk AI systems that evaluate creditworthiness and, under point 5(c), those used for risk assessment and pricing of life and health insurance for natural persons; AI that determines access to other essential services can also be captured. High-risk classification means full compliance with the Act's requirements for risk management, data governance, documentation, human oversight, and accuracy.
The interaction between the EU AI Act and existing insurance regulation (Solvency II, Insurance Distribution Directive) is still being worked out by regulators. But the direction is clear: AI in insurance will face sector-specific application of horizontal AI rules, on top of the insurance-specific requirements that already exist.
Insurers that build their compliance programs now, based on the highest common requirements across jurisdictions, will be far better positioned than those that wait for each regulation to take effect and scramble to catch up.
Key Takeaways
- The NAIC model bulletin makes insurers responsible for AI outputs, including those from vendor-built systems. You cannot outsource accountability.
- Colorado's laws target disparate impact, not just intentional discrimination. Removing protected characteristics from model inputs is not sufficient — you must test outputs.
- Claims processing and fraud detection AI face the same fairness requirements as underwriting models, and state AG investigations are already producing enforcement actions.
- Start with an inventory of every AI system, test for disparate impact, and build explanation capabilities for consumer-facing decisions.
Disclaimer: Content on AIRegReady is educational and does not constitute legal advice. Regulatory summaries are simplified for clarity and may not capture every nuance of the underlying law or guidance. Consult qualified legal counsel for specific compliance obligations. Information was accurate as of the date noted but regulations change frequently.