
The regulatory implications of AI and ML for the insurance industry

In recent years, the insurance industry has witnessed a profound transformation driven by the exponential growth of artificial intelligence (AI) and machine learning (ML) technologies. While these advancements have ushered in unprecedented opportunities for insurers, regulators are faced with the task of ensuring that insurers have a structured AI governance program in place and that these systems comply with incoming regulations.

Before embarking on any AI initiatives, it’s crucial to have a comprehensive understanding of the AI technology landscape, a strong grasp of the business uses, risks and rewards of leveraging these technologies, and a robust data governance framework to govern these systems and manage risk as your AI systems continue to grow. These, together with an understanding of the incoming AI regulatory implications, are essential for the insurance sector to harness the full potential of AI and ML while ensuring responsible and compliant deployment of these transformative technologies.

AI regulations and guidance

In 2020, the National Association of Insurance Commissioners (NAIC) introduced the AI Principles – often referred to as FACTS[1] – to serve as a framework to encourage the ethical and responsible adoption of AI technologies within the insurance industry. These principles aim to balance innovation with consumer protection, fairness and regulatory compliance and are as follows:

  • Fair and Ethical: Insurers should strive for fairness in decision-making processes, ensuring that AI algorithms do not discriminate against individuals based on protected characteristics such as race, gender, ethnicity or age
  • Accountable: Insurers are encouraged to have mechanisms in place to trace how AI systems arrive at decisions and take responsibility for their outcomes
  • Compliant: Insurers should ensure that their AI systems comply with legal requirements and ethical guidelines governing the insurance industry
  • Transparent: Insurers should strive to make AI processes understandable and explainable to consumers and regulators, providing clear information on the factors influencing decisions made by these systems
  • Safe, Secure and Robust: Insurers should implement robust security measures to protect sensitive information and maintain the integrity of AI technologies used in insurance operations

While these principles are not law and therefore not enforceable, they do set out the regulators’ expectations and will form the basis for future regulatory workstreams.

Updated NAIC Model Bulletin on AI

In 2023, the NAIC released a Model Bulletin on the “Use of Algorithms, Predictive Models and Artificial Intelligence Systems by Insurers”[2] to address the increasing integration of AI in insurance operations, particularly in areas such as underwriting, claims processing, risk assessment and customer interactions. Insurance companies are encouraged to create, execute and maintain a written AI Systems (AIS) program to ensure that decisions are accurate and adhere to both unfair trade practice laws and other relevant legal criteria.

While the bulletin provides additional in-depth details, the following provides a summarized view of what organizations should have in place regarding their AI systems:

General guidelines

Your organization’s AIS program should be designed to mitigate the risk that AI systems lead to decisions affecting consumers that are arbitrary, discriminatory or in violation of unfair trade practice laws. Get started by ensuring your AIS program has the following:

  • Built in program governance, risk management controls and internal audits
  • An AI strategy that is managed by senior management (approved by the board) and that governs development, monitoring and continued oversight
  • A customized approach based on AI use, either stand-alone or integrated with existing risk management
  • External frameworks that align with existing practices
  • Coverage of AI systems throughout the product life cycle, addressing all development phases, including third-party systems, for comprehensive oversight

Governance

Your organization’s AIS program should prioritize transparent, fair and accountable AIS design within existing or new governance structures and cover standards, life cycle policies and compliance documentation. Get started by ensuring your AIS program has the following:

  • Defined roles and responsibilities across the AI program and each life cycle stage; centralized/federated representation
  • Detailed qualifications, responsibilities, communication and hierarchy; ensure independence of decision makers, and set escalation protocols
  • Implemented protocols for monitoring, auditing and reporting
  • Documented processes from design through monitoring, including error detection and the remediation of discriminatory outcomes

Risk management and internal control

Your organization’s AIS program should have a documented risk identification and control framework across all life cycle stages. Get started by ensuring your AIS program has the following:

  • Data quality, lineage, bias analysis and accountability exist throughout development
  • Management of algorithms and predictive models with inventory, documentation, interpretability and auditability
  • Evaluation against alternative models, monitoring for drift and ensuring traceability (see the drift sketch after this list)
  • Data/algorithm validation, security and retention meet expected standards/protocols
  • Model objectives are specified and validated throughout development
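
To make the drift point above concrete, the following is a minimal sketch, in Python, of one common way to quantify drift between a model’s development-time (baseline) score distribution and its current production scores using a Population Stability Index (PSI). The bucketing approach, the 1e-6 floor and the 0.25 escalation threshold are illustrative assumptions, not NAIC requirements.

```python
import numpy as np


def _bucket_fractions(scores, edges):
    # Assign each score to one of the baseline quantile buckets; scores outside
    # the baseline range fall into the first or last bucket.
    idx = np.searchsorted(edges[1:-1], scores, side="right")
    counts = np.bincount(idx, minlength=len(edges) - 1)
    return counts / len(scores)


def population_stability_index(baseline_scores, current_scores, n_buckets=10):
    # Bucket edges come from the baseline (development-time) distribution.
    edges = np.quantile(baseline_scores, np.linspace(0, 1, n_buckets + 1))
    expected = np.clip(_bucket_fractions(baseline_scores, edges), 1e-6, None)
    actual = np.clip(_bucket_fractions(current_scores, edges), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline = rng.beta(2.0, 5.0, size=10_000)  # scores observed at model approval
    current = rng.beta(2.5, 5.0, size=10_000)   # scores observed in production
    psi = population_stability_index(baseline, current)
    # Assumed rule of thumb: PSI above 0.25 triggers escalation for model review.
    status = "escalate for review" if psi > 0.25 else "within tolerance"
    print(f"PSI = {psi:.3f} -> {status}")
```

In practice, the drift metric, threshold, review cadence and escalation path would be defined in the AIS program’s documented validation and monitoring protocols rather than hard-coded as above.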

Third-party AI systems

Your organization’s AIS program should establish standards, policies, procedures and protocols for using third-party AI Systems. Get started by ensuring your AIS program has the following:

  • Due diligence and assessment to ensure alignment with legal/company standards
  • Contractual terms with third parties, including right to audit
  • Audits and confirmatory activities conducted to verify third-party compliance with contracts and regulatory requirements

Additionally, the bulletin provides information regarding regulatory oversight of an insurer’s AIS program, including the monitoring and audit activities regulators may undertake to confirm compliance, such as:

  • Executive oversight/governance structure
  • Inventory and management of algorithms, predictive models and AI Systems used (see the inventory sketch after this list)
  • Evidence to support compliance with all applicable AI Program policies, protocols and procedures
  • Data flow diagram/documentation on data sources used, provenance, data lineage, quality, integrity, bias analysis and minimization, suitability and traceability
  • Techniques, measurements, thresholds, benchmarking and similar controls
  • Validation, testing and auditing, including evaluation of drift
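
As an illustration of how the inventory and evidence items above might be operationalized, the following is a minimal sketch, assuming Python-based tooling, of an AI-system inventory record that tracks ownership, data sources, third-party status, bias analysis, drift reviews and validation evidence, and flags gaps for follow-up. The field names, the 12-month review window and the compliance_gaps helper are hypothetical illustrations, not items prescribed by the NAIC bulletin.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AISystemRecord:
    system_id: str
    name: str
    business_use: str                  # e.g., underwriting, claims triage
    owner: str                         # accountable role, not an individual
    lifecycle_stage: str               # design / development / production / retired
    data_sources: list[str] = field(default_factory=list)   # provenance / lineage
    third_party: bool = False
    contract_allows_audit: bool | None = None                # relevant if third_party
    last_bias_analysis: date | None = None
    last_drift_review: date | None = None
    validation_evidence: list[str] = field(default_factory=list)  # links to docs/tests

    def compliance_gaps(self, review_window_days: int = 365) -> list[str]:
        """Flag missing evidence; the 12-month review window is an assumption."""
        gaps = []
        if self.third_party and not self.contract_allows_audit:
            gaps.append("third-party contract lacks right-to-audit confirmation")
        for label, when in [("bias analysis", self.last_bias_analysis),
                            ("drift review", self.last_drift_review)]:
            if when is None or (date.today() - when).days > review_window_days:
                gaps.append(f"{label} missing or older than {review_window_days} days")
        if not self.validation_evidence:
            gaps.append("no validation/testing evidence on file")
        return gaps


if __name__ == "__main__":
    record = AISystemRecord(
        system_id="UW-001",
        name="Auto underwriting risk score",
        business_use="underwriting",
        owner="Chief Underwriting Officer",
        lifecycle_stage="production",
        data_sources=["policy_admin_db", "third_party_credit_feed"],
        third_party=True,
        contract_allows_audit=True,
        last_bias_analysis=date(2023, 11, 1),
        validation_evidence=["validation_report_2023Q4.pdf"],
    )
    print(record.compliance_gaps())
```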

Developing an AI governance framework

Creating an AI governance framework at the start of your AI journey will ensure your insurance organization can maintain oversight and controlled growth of its AI systems and will allow you to comply with incoming regulations more easily. When designing your AI governance framework, it’s important to take a strategy-first approach, as strategy should be central to everything you do. As you think about the strategy of your AI program, consider it through the lens of governance and the potential regulation timeline. Then, establish a governance framework to help control strategic growth as you execute on that strategy.

It’s important to note that continued monitoring and reporting of AI systems should be occurring during development, implementation and use – not just at the end once the system is in place.

How we can help

Ensuring your insurance organization is properly equipped to adhere to incoming AI regulations by designing and implementing an AI governance framework first will save your organization time, energy and resources by preventing the need for retrospective efforts.

Baker Tilly's digital team can help your organization define your AI strategy, build an AI governance program or, if you already have things in place, we’ll work with you through the execution and implementation of your AI systems. Interested in learning more? Contact one of our professionals today.

Artificial intelligence in the insurance industry webinar

Below you will find the presentation and recording from our recent webinar, Artificial intelligence in the insurance industry: How to balance innovation with regulatory and ethical considerations. For more information on the subject, and to learn more about how we can assist your organization with its AI strategy, refer to our artificial intelligence and insurance webpages.

Dave DuVarney
Principal
Phil Schmoyer
Principal
John Romano
Principal