
Understanding artificial intelligence risk in insurance products – the challenges

13 April 2025

Head of Technical Underwriting,
Lloyd’s Market Association

Like many in our industry, the Lloyd’s Market Association (LMA) is at the start of the journey in understanding artificial intelligence (AI) and thinking through the implications for insured exposures. The LMA’s role, as ever, is to assist our members and to make the market a better place. In 2023, we hosted a series of educational events to begin exploring what AI is all about, considering some of the use cases for insureds and the legal frameworks that might govern the risks in the UK, EU and US, primarily from a liability insurance perspective.

We are now thinking a bit more deeply about potential AI loss scenarios for various insurance products and the use of model cyber clauses to insure, limit or exclude the attendant risks. Could an AI malfunction in an industrial process cause pollution in a river? Could an AI avatar used in healthcare administration provide a negligent service to a patient? Would such events result in insured losses? If so, would they be severe or minor? High or low frequency? To try and better understand these issues, we have launched a market survey to gather perspectives from underwriters and other professionals within the Lloyd’s market. We look forward to sharing our findings and insights in the coming weeks. In developing the survey, we have encountered some complex concepts and we hope this process will build a clearer understanding of these issues.

What is AI?

A widely agreed taxonomy of AI systems has yet to be adopted by the international community of AI developers, users and regulators. However, such a framework may well emerge in due course, as it did in the related world of autonomous vehicles, where the six-level scale originally proposed by the US Society of Automotive Engineers (SAE) in 2014 has since become a widely used and useful reference point for discussion.

Various descriptions and definitions of AI systems are available, such as the OECD’s Framework for the Classification of AI Systems, which is aimed at lawmakers and regulators, and the National Institute of Standards and Technology’s AI Use Taxonomy, a classification system based on 16 categories of AI function (for example, content creation, decision making, image analysis and monitoring). There are dozens more, including some very flowery descriptions of AI systems put forward by software developers with products to sell. In our experience, it is generally wise for the insurance industry to review the work of others and then develop a more nuanced approach tailored to its specific needs.

The LMA is thinking about AI very broadly at present. Fundamentally, AI is software and so, for insurers, AI is a subset of cyber risk. AI systems could include “any computer system that is able to perform tasks that would otherwise require human intelligence, without significant human oversight, such as visual perception, speech recognition, decision making and language translation”. AI systems could include large language models (LLMs), machine learning software and deep learning software.
Thinking about AI in the context of contractual language within insurance policies, most LMA model cyber clauses (of which there is a vast library) use defined terms to describe various cyber-related concepts and our definition of “computer system” includes software, which by extension already includes AI systems.

Given that AI is a subset of cyber risk, is there any need to differentiate AI from other cyber risks in policy language? For instance, if software is used to control the addition of chemicals to drinking water, does it matter whether the software uses AI technology or not? A negligent insured is still likely to be liable for damages either way if their software contaminates the water and customers are injured. If the deployment of AI does change risk profiles, it may be necessary in future to amend our definition of “computer system”, or to develop model definitions of AI systems, to facilitate more nuanced coverage, limitations or exclusions as required by the market.

What are insureds doing with AI?

It is difficult to know for certain, as insurers are continually learning about AI’s uses. However, we are aware that LLMs are being used in service industries already, or are likely to be in the near future, perhaps very widely, and that AI is also finding applications in industrial design and operations. There are limitless commercial applications for clever software. A 2024 McKinsey & Co survey found that 72% of US businesses were using AI for at least one function, up from 50% in 2022 and 20% in 2017.

In our market survey, we are assessing how much respondents currently know about their insureds’ use of AI – in the contexts described in the loss scenarios – either now or in the near future. We are also gathering insights into whether AI systems are still in a testing phase or are already fully deployed, as well as respondents’ views on whether typical clients are innovators, early adopters, later adopters or outright Luddites!

The source of this information is the market’s experience in dealing with brokers and customers daily and possibly, in some cases, handling claims that already include an element of AI contribution. It will be interesting to review the survey results and share these insights with the wider market.

We may find that underwriters do not yet know as much about AI systems as they might wish to. It has been suggested that we should explore AI use further by speaking directly to insureds, major industry associations, consultants and other stakeholders, to expand our knowledge. This is something we would be happy to consider, especially in relation to loss scenarios that are of high concern to the market.

How and when will AI go wrong? Which insurance products might respond?

As outlined above, there are many different types of AI systems and customer uses. The error rate in each context is unknown – errors may be systemic or unique to each circumstance. The error rate could certainly be non-linear, given that some AI systems develop capability in real time, meaning they might give different responses to the same stimuli at different times.

We certainly anticipate overlapping product responses in some circumstances and are exploring this across each of the scenarios in the survey. For example, in the water contamination liability scenario referenced above, the same event could trigger both third-party claims for bodily injury and first-party accident and health claims.

The risk of loss will be driven by the usual factors: circumstances, risk controls, errors, bias, malicious actors and fortuitous events – a complex mix. In our survey, we are exploring views on a range of factors that could have some bearing on loss frequency/probability in each scenario. These include: Does the AI system decide what real-world actions to take? What are the sources of data used by the AI? Has the AI system been fully tested in the correct circumstances? Is it supporting a human decision-maker or replacing human functions without supervision? Respondents may not have all the answers, but these are interesting areas to explore and the responses may indicate areas for future research.

Protective effects and risk mitigation

It should be noted that while our market survey focuses on the loss potential of AI systems in various scenarios, there are likely to be protective effects of AI that could reduce the overall risk profile of many insureds. AI systems are being introduced into commercial processes to improve efficiency, speed up product design and development, and reduce costs. It is entirely possible that insureds – and by extension, insurers – may see correlated benefits in terms of risk reduction and improved hazard management.

We are exploring how claims might arise at law – in tort, under product defect rules or under bespoke regimes – which will depend on the circumstances of the loss, as well as the applicable law and jurisdiction.

We are also exploring how losses arising from AI system errors might be mitigated. While AI can fail, human intervention, systems and controls should serve to limit the damage. Are current controls likely to be adequate to prevent a scenario causing a loss in most cases? In all cases?

And of course, the use of cyber clauses – which remain mandatory for nearly all classes written on Lloyd’s paper – could have a significant impact on any loss, depending on whether the clause in use affirms coverage, applies a sub-limit or excludes the risk entirely. We are exploring this issue in the final section of our survey.

Knowledge gaps

Above all, we are conducting a market survey on AI loss scenarios to explore what we know – and don’t know – about the current use of AI systems by insureds. While there are many variables, there is also a lack of data. The use of AI by insureds is likely to grow rapidly in the next few years. One goal is to identify scenarios that seem remote, unlikely and/or have a low severity potential, so that these do not distract the market (and other stakeholders, such as regulators) from more pressing scenarios – those with high probability and severity, which would be risks to monitor and manage carefully.

We look forward to sharing the results and insights from our survey with the market.
