There is a lot of excitement about Artificial Intelligence (AI) and the innovative solutions it could bring. However, it also brings new regulatory obligations and requirements. So what does this mean for insurance?
A recent survey by Instech.ie showed that 42% of insurers around the world are already investing in Generative AI (GenAI), with 57% having plans to invest. The same survey cited 82% of large insurers investing (or planning to invest) in GenAI for productivity gains, with 52% of the insurers surveyed expecting cost savings of 11-20%.
The EU’s Artificial Intelligence Act has significant implications for insurers using AI. The Act was formally adopted by the European Parliament and Council in May 2024, published in the EU Official Journal on 12 July 2024, and entered into force on 1 August 2024. However, the Act provides a two-year transition period before many of its provisions actually apply, giving insurers an implementation period to consider how to evidence adherence to the obligations in the Act.
The Act aims to address the risks involved with AI across a wide variety of sectors, including insurance. Its main objective is to promote the uptake of human-centric and trustworthy AI while ensuring a high level of protection of health, safety and fundamental rights (amongst others), and to support innovation. The Act assigns rights and obligations to the different stakeholders in the AI chain. The European Insurance and Occupational Pensions Authority (EIOPA) has set out that the AI Act and insurance sector legislation have complementary objectives, which should address any potential conflicts and ensure consistency of supervision.
The Act applies a risk-based approach to the oversight of AI – the higher the risk, the stricter the rules. There are four main classifications under the Act, based on the potential use of the AI system:
- Unacceptable risk – these systems are prohibited.
- High risk – these systems are regulated and subject to strict requirements.
- Limited risk – these systems carry transparency requirements to be clear about the AI tools used, helping users understand that they are communicating with AI and not a human.
- Minimal risk – these systems attract the lowest level of regulation.
All AI systems other than high-risk will be subject to general transparency requirements, AI literacy and voluntary codes.
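To make the tiering concrete, the sketch below models the four classifications and the obligations summarised above as a simple lookup. It is a minimal illustration of the Act's risk-based structure, not a legal mapping: the tier names come from the Act, but the obligation summaries are paraphrased from this article.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk classifications under the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # regulated, full compliance framework
    LIMITED = "limited"            # transparency requirements apply
    MINIMAL = "minimal"            # lowest level of regulation

# Paraphrased obligation summaries per tier; illustrative only,
# not a substitute for the Act's actual articles and annexes.
BASELINE_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["system is prohibited"],
    RiskTier.HIGH: [
        "risk management and oversight framework",
        "data management",
        "human oversight",
        "impact assessments",
        "Board-level review and oversight",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: ["general transparency, AI literacy, voluntary codes"],
}
```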
Insurance and financial services (FS) are called out in the Act as areas where the use of AI systems can be high-risk. It will therefore be vital for insurers who want to utilise such systems to ensure that there is a clear risk management and oversight framework for such tools. This will require a comprehensive set of requirements including data management, human oversight, impact assessments and Board-level review and oversight. Some of this will already be covered by requirements under Solvency II, the Insurance Distribution Directive (IDD) and various conduct and operational legislation such as the Digital Operational Resilience Act (DORA) and the General Data Protection Regulation (GDPR). It should be noted that AI systems used for the purpose of detecting fraud in the offering of financial services, or for prudential purposes to calculate insurance undertakings' capital requirements, will not be considered 'high risk'.
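As a thought experiment, an insurer triaging its AI inventory against the Act might start with something like the function below, building on the RiskTier enum sketched earlier. The carve-outs for fraud detection and prudential capital calculation reflect the exclusion noted above; everything else (the use-case labels and their tier assignments) is a simplifying assumption for illustration, not a legal determination.

```python
def classify_insurance_use_case(purpose: str) -> RiskTier:
    """Illustrative triage of an insurer's AI use case into a risk tier.

    Use-case labels are hypothetical; tier assignments other than the
    explicit carve-outs are assumptions, not legal conclusions.
    """
    # The Act states that fraud detection and prudential capital
    # calculation are not treated as 'high risk'; assumed here to
    # fall to the minimal tier.
    if purpose in {"fraud_detection", "prudential_capital_calculation"}:
        return RiskTier.MINIMAL

    # Assumption: core underwriting and pricing decisions affecting
    # individuals sit in the high-risk tier for insurance.
    if purpose in {"underwriting_risk_assessment", "pricing"}:
        return RiskTier.HIGH

    # Customer-facing tools carry transparency duties (limited risk).
    if purpose in {"chatbot", "virtual_assistant"}:
        return RiskTier.LIMITED

    return RiskTier.MINIMAL  # default assumption for back-office tooling


# Example: a pricing model would trigger the full high-risk obligations.
assert classify_insurance_use_case("pricing") is RiskTier.HIGH
```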
The Commission has been tasked with developing guidelines on the application of the Act's definition of an AI system:
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
There is a built-in evaluation and review process for the different elements of the AI Act, providing flexibility for adjustments if needed.
In May 2024, Ireland's Department of Enterprise, Trade and Employment published a consultation seeking views on the implementation of the AI Act. The consultation explored the issues involved in designating a National Competent Authority (NCA) for the Act, as well as possible synergies between the various EU regulations relating to digital activity.
While AI undoubtedly has the potential to bring competitive advantage to firms and the insurance sector, important questions remain to be answered around the rationale for introducing it on a firm-specific basis, the certainty of the benefits versus the uncertainty of the risks, and the resources involved in building an appropriate risk and oversight framework specific to this technology.