
Executive summary

In the absence of market underwriting and claims data on AI exposures, the Lloyd’s Market Association (LMA) conducted a market opinion survey to explore key issues relating to insured AI risks.[1] The survey explored:

  • whether insureds are using AI, right now, in ways that could cause or contribute to claims
  • which loss scenarios LMA members are most concerned about
  • views on whether or not insureds are managing the risks adequately
  • views on the potential scale of losses.

The survey was conducted in mid-2025, with responses invited on a wide range of potential AI loss scenarios (see Annex 1 for the full list). Four scenarios have been included in this analysis:

  1. Professional Indemnity: “AI produces erroneous advice/service to clients, causing a loss.”
  2. Product Recall: “Recall required following property damage and/or bodily injury caused by defective products designed and/or manufactured by AI, e.g. food contamination.”
  3. Accident and Health (A&H): “Self-driving car error causes injury to passenger: first-party A&H policy could respond in first instance.”
  4. Cyber: “System downtime caused by AI malfunction and resulting Business Interruption.”

Results

Based on the opinions provided by LMA members to the survey (of which 94% were underwriters), each scenario has been rated for viability (score out of 5), magnitude (score out of 5) and overall impact (score out of 25; see methodology for further information).

Scenario | Viability | Magnitude | Overall impact
Professional Indemnity | 2.82: Plausible | 3.8: Medium | 10.72: Moderate
Product Recall | 1.45: Highly unlikely | 3.15: Medium | 4.57: Minimal
A&H | 2.6: Plausible | 3.15: Medium | 8.19: Low
Cyber | 3.42: Plausible | 2.87: Minor | 9.82: Moderate
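
The overall impact score for each scenario is the product of its viability and magnitude ratings; for example, the Professional Indemnity score is 2.82 × 3.8 ≈ 10.72, which falls in the “moderate” band (see the Methodology section for the rating scales and bands).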

Answers to research questions

Are insureds using AI, right now, in ways that could cause or contribute to claims?

Yes.

Three of the four scenarios (Professional Indemnity, A&H and Cyber) were rated as “plausible” (the middle rating on the five-point scale), indicating that respondents regarded these loss scenarios as credible risks that could occur in real-world settings, though not as “likely” or “highly likely”.

The Product Recall scenario was rated as “highly unlikely” for viability, perhaps indicating that insurers believe insureds are not, at present, generally using AI for product design or manufacturing in ways that could lead to insured losses caused by AI errors.

The survey also indicated that AI usage (in the ways described in the scenarios) was expected to increase, which could increase exposure to AI error-driven losses in the future. 67% of the total responses agreed that AI usage would increase in the next 12 months and 86% agreed that AI usage would increase in the next two to three years.[2]

Respondents to the Professional Indemnity scenario indicated more established use of AI in financial and professional lines at present than in other product areas, with 100% of responses suggesting AI is already being used by insureds, at least to some extent. 79% of responses also suggested AI usage is expected to increase over the next 12 months.

In Product Recall, live exposure to risks arising from use of AI in product design or manufacturing is currently viewed by respondents as low. However, significant growth potential is expected in the short to medium term.

Respondents to the Cyber scenario agreed that there was at least some degree of system failure risk posed by AI usage at present.

Note: This survey did not assess any protective effects of AI use – the focus was on loss potential only.

Which loss scenarios are LMA members most concerned about?

Of the four scenarios in this summary, the Cyber scenario received the highest score for viability, while the Professional Indemnity scenario received the highest scores for both magnitude and overall impact. The overall impact ratings (calculated by multiplying the viability ratings by magnitude ratings) scored the Professional Indemnity and Cyber scenarios as “moderate” (the middle rating, in between “low” and “major”).

Respondents generally agreed that there is considerable potential for AI-related losses to arise in the provision of professional services, given the wide usage of AI large language models by insureds.

The potential overall impact of the Product Recall scenario was rated as “minimal” and that of the A&H scenario as “low”.

Note: The level of concern/scale of any loss would also depend on the terms and conditions of applicable wordings and use of exclusions. In these scenarios we did not ask respondents to apply a specific wording, given a wide range of cyber clauses are in use in most lines of business. When estimating loss potential, we asked respondents to assume that the loss was covered. Also see Section 5: Treatment of AI risk in LMA model wordings.

Are insureds managing the risks (of AI errors causing insured losses) adequately?

LMA members believe that they largely are, with some variability. The total responses indicated reasonable confidence that insureds were testing AI use in representative conditions, with more than two-thirds of responses suggesting it was “very likely” (51%) or “somewhat likely” (20%).

Only 18% of global responses suggested that insureds’ existing AI risk management/safety procedures are inadequate. The majority of responses indicated that risk management is either “adequate to reduce the risk to an acceptable level in most cases” (45%) or “adequate to reduce the risk of this scenario causing a severe loss” (37%).

In the Professional Indemnity scenario, 89% of respondents felt that existing risk management/safety procedures were “adequate” (vs 11% “inadequate”). This indicates a high degree of confidence in insureds to manage the risks arising from use of AI systems.

In the A&H scenario, where losses arise from an accident caused by a self-driving vehicle (and given the obvious risks arising from AI failure in this scenario), it is reasonable to expect extremely careful risk-management strategies to be deployed. The majority of respondents supported this hypothesis, with 64% of responses agreeing that testing by risk managers was adequate to reduce the risk to an acceptable level “in most cases” and 9% saying that testing was “adequate to reduce risk of causing a severe loss”. However, 27% said that testing was “inadequate to reduce or manage risks”, so this confidence was not universally shared.

Note: The survey explored the confidence levels/opinions of respondents regarding insureds’ risk management. The survey did not collect risk management data, claims information or data from external sources.

What are respondents’ views on the potential scale of losses?

The magnitude ratings for all scenarios were either “medium” or “minor”; based on weighted averages, respondents clearly did not fear full-limits exposure, although there was a wide range of views on the loss potential of each scenario (see detailed analysis below).

The Professional Indemnity scenario produced the highest magnitude rating of 3.8 out of 5, at the higher end of the “medium” band. The potential for losses was estimated as reasonably significant, with 58% of responses taking the view that losses arising from this type of scenario might reach full policy limits (vs 33% across the total responses), and 5% of responses estimating that losses might be around 80% of policy limits.

Despite attracting the highest score for viability, the Cyber scenario’s magnitude rating was “minor” (2.87 out of 5), indicating the view from respondents that this scenario is likely to produce relatively low-level losses, with over a third of respondents suggesting a loss might scale at 15% of typical policy limits. However, one quarter indicated losses arising from this type of scenario might reach full policy limits.

Note: These observations are indicative of the views of the survey respondents and have been extrapolated to make general observations in some cases; the actual values of any individual loss would depend on the circumstances, policy wording, applicable terms and conditions, limit of coverage, etc.

Treatment of AI risk in LMA model wordings

Drawn from a related piece of work commissioned by the LMA Chief Underwriting Officers’ Committee, Section 5 of this paper sets out a high-level analysis of potential AI exposures arising in LMA model Cyber wordings. A summary table of the current position is set out in Annex 2.

Conclusions

  1. The risks presented by the AI loss scenarios reviewed in the survey attracted, on the whole, only a modest degree of concern amongst underwriters. The risks were widely expected to grow in the future.
  2. Respondents felt that the Professional Indemnity scenario was the most concerning of the scenarios reported on and Product Recall was the least concerning.
  3. Testing and risk management by insureds is key to ensuring that the risks are managed to an acceptable level.
  4. The survey attracted a wide range of views and there is a lack of underwriting and claims data about AI exposures. Case studies with detailed event profiles, applying specified wordings and limits of cover, may usefully reveal more information about potential AI exposures.
Disclaimer

This document has been produced by the Lloyd’s Market Association (LMA) for general information purposes only. While care has been taken in gathering the data and preparing the document, LMA makes no representations or warranties as to its accuracy or completeness and accepts no responsibility or liability for any loss arising as a result of any reliance placed on the contents. This document does not constitute legal advice.

[1] The LMA definition of AI, for the purposes of this survey, is: “software that is automated and generates information outputs based on a statistical model of data taken from similar or informative scenarios”. See introduction below for further information.

[2] “Total responses” refers to all responses across the entire range of scenarios.

Introduction

Artificial intelligence (AI) systems and tools have been an extremely hot topic in insurance for several years and are likely to remain so for the foreseeable future. Many boards of Lloyd’s managing agents have recently cited AI risk as second only to geopolitical risks on risk registers.[1] In order to explore some of the key themes emerging from insured AI risk, the LMA conducted a survey of members, asking for views across a range of potential loss scenarios.

Primarily, the survey was intended to research:

  • LMA members’ views on whether insureds were using AI, right now, in ways that could cause or contribute to claims
  • which loss scenarios LMA members were most concerned about
  • LMA members’ views on whether insureds were managing the risks adequately
  • members’ views on the potential scale of losses.

Definition of AI

The definition of AI used in the survey describes an AI system as “software that is automated and generates information outputs based on a statistical model of data taken from similar or informative scenarios”.

In the survey, respondents were encouraged to think about AI very broadly, including any computer system able to perform tasks that normally require human intelligence, under varying and unpredictable circumstances, without significant human oversight, such as visual perception, speech recognition, decision making and language translation. AI systems could include large language models, machine learning software and deep learning software.

Use of cyber affirmations and exclusions in policy wordings

The survey also collected some general views on whether AI exposures might typically be covered or excluded in policy wordings. The LMA previously explored this issue and briefed the LMA Chief Underwriting Officers’ Committee in 2024 on a high-level analysis of LMA model wordings exposure to a list of AI loss scenarios. An updated summary table of this work is provided in Annex 2.

Some high-level questions on the use of exclusions were included in the survey but are not reported in this summary. On review, it was not appropriate to report on generalised observations about coverage/exclusion of losses without narrowing down the scenario(s) to an exact set of circumstances and the exact clause that was used; this is outside the scope of the survey and would be more accurately researched via case studies.

Disclaimers

The observations reported in this summary do not relate to specific losses, wordings or exclusions. An authoritative view on the coverage or exclusion of a specific loss would require a review of the facts and circumstances and the application of the policy wording(s) used.

We also did not explore the protective effects of AI in this survey; the benefits that could accrue to insureds and (re)insurers from the use of AI could be material.


[1] Lloyd’s, 2025. Emerging Risks Report.

Methodology

A 31-question survey was developed and issued by the LMA to members in April 2025. Members were invited to select any of the listed scenarios and complete the question set and were encouraged to submit responses for any of the scenarios within their area(s) of expertise.

The LMA developed a lengthy suite of AI loss scenarios: 65 in total, across 10 high-level classes of business. Some scenarios were invented for this exercise and some were based on events that have (or may have) already occurred. The scenarios were validated with the LMA’s underwriting committees, to ensure that each scenario was believed to be technically viable, even where the probability of a particular scenario occurring was suspected to be very low.

The scenarios were deliberately broadly drafted, to try to encompass a wide range of AI use cases/loss scenarios and attract as many responses as possible, on the basis that this survey is exploring AI risks at a high level only. For example, in the Professional Indemnity Scenario 3.1 “AI produces erroneous advice/service to clients, causing a loss”, the type of advice/service provided is not specified, as the intention was to capture any type of advice or service provided to any type of professional client. This approach does have its limitations in terms of the degree to which the views and opinions captured might be generally applicable, but the intention was to capture high-level views on generic AI loss scenarios for major insurance products.

Respondents

144 responses to the survey were collected in Q2 and Q3 2025, 94% of which were from underwriters. The distribution of responses by job role was:

  • Leadership/C-suite: 12%
  • Head of Class/Senior Underwriter/Senior Manager/Senior Product Manager: 33%
  • Manager: 31%
  • Team member: 25%

Many respondents advised that the survey questions were difficult to answer. Not enough responses were collected to analyse every scenario included in the survey. Therefore, this summary focuses on the four scenarios with the highest response rates; a minimum threshold of six responses per scenario was applied, and up to 19 responses were received for some scenarios.

Respondents reported the survey as difficult to complete, for two main reasons:

  • Insurers are remote from insureds’ use of AI: it is not always clear whether an insured is using AI in the ordinary course of their business, and little data on AI usage is currently collected during underwriting, so survey questions exploring this usage relied on underwriters’ views and opinions rather than data.
  • There is a lack of claims data on AI contribution to losses, again meaning that the survey responses are informed by views and opinions rather than data in most cases.

Future research may be able to draw more directly on underwriting and claims data. However, the key observations arising from this survey still address the LMA’s overall goal of contributing to the sum of knowledge on AI risks and sharing the insights arising from this research with LMA members.

Scenario viability, magnitude and overall impact ratings

Each of the four scenarios that feature in this analysis has been given a viability rating and a magnitude rating, to summarise respondents’ views on these core issues. The ratings are based on an assessment of respondents’ views on key questions in the survey, regarding whether the scenarios are believed to be technically possible or not, and the level of concern about the loss potential. An overall impact rating for each scenario has also been produced by multiplying the viability and magnitude ratings together.

The viability rating is an assessment of whether the scenario is believed to be technically possible or not, based on responses to questions that probed:

  • whether AI was thought to be in use in live situations, in the way set out in the scenario
  • if so, whether it was widely or narrowly deployed
  • whether AI was making “real-world” decisions.

The viability rating is expressed on a five-point Likert scale, using weighted average responses:

Score | Viability rating | Rationale
1 | Highly unlikely | Responses indicated that this scenario was highly unlikely to occur
2 | Unlikely | Responses indicated that this scenario was unlikely to occur
3 | Plausible | Responses indicated that this scenario was plausible
4 | Likely | Responses indicated that this scenario was likely to occur
5 | Highly likely | Responses indicated that this scenario was highly likely to occur

The viability rating also touches on potential frequency; the higher the score, the more likely respondents felt the scenario was to occur and produce a real loss event.

The magnitude rating summarises respondents’ opinions on the potential scale of loss, should the scenario occur for real. The rating is drawn from opinions received on the question exploring the potential scale of loss, expressed as a percentage of typical limits of cover. The magnitude rating is expressed on a five-point Likert scale, using weighted average responses:

Score | Magnitude rating | Rationale
1 | Low value | Responses indicated that this scenario could produce low-value losses only
2 | Minor | Responses indicated that this scenario could produce minor losses
3 | Medium | Responses indicated that this scenario could produce medium losses
4 | Sizeable | Responses indicated that this scenario could produce sizeable losses
5 | Full limit | Responses indicated that this scenario could produce full-limit losses

An overall impact rating for the four loss scenarios has been produced by multiplying the viability rating and magnitude rating of each scenario together. This rating is intended to summarise the respondents’ opinions on the potential overall impact of a loss scenario, reflecting views on both the viability of the scenario and potential scale of loss in the event the scenario occurred.

Score | Overall impact rating | Rationale
<5 | Minimal | Overall impact (per loss) estimated as potentially minimal
<9 | Low | Overall impact (per loss) estimated as potentially low
<13 | Moderate | Overall impact (per loss) estimated as potentially moderate
<17 | Major | Overall impact (per loss) estimated as potentially major
17+ | Severe | Overall impact (per loss) estimated as potentially severe

Note: The ratings are intended to be indicative of the opinions of respondents to the survey; they are not based on actuarial calculations or driven by claims, exposure or risk management data.
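
For illustration, the rating arithmetic described above can be sketched in code. This is a minimal sketch, assuming responses are tallied per Likert score and combined as a count-weighted average; the response distributions and function names below are hypothetical and do not reflect the survey’s actual data or tooling.

```python
# Illustrative sketch of the weighted-average Likert ratings and
# overall impact banding described above. All data is hypothetical.

def weighted_average(counts: dict[int, int]) -> float:
    """Weighted-average score from a {score: number_of_responses} tally."""
    total = sum(counts.values())
    return sum(score * n for score, n in counts.items()) / total

def impact_band(score: float) -> str:
    """Band an overall impact score (viability x magnitude, out of 25)."""
    if score < 5:
        return "Minimal"
    if score < 9:
        return "Low"
    if score < 13:
        return "Moderate"
    if score < 17:
        return "Major"
    return "Severe"

# Hypothetical response distributions for one scenario.
viability = weighted_average({1: 1, 2: 4, 3: 8, 4: 5, 5: 1})  # ~3.05
magnitude = weighted_average({1: 2, 2: 6, 3: 7, 4: 3, 5: 1})  # ~2.74
overall = viability * magnitude
print(f"Overall impact: {overall:.2f} ({impact_band(overall)})")
```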

Key loss scenarios

Detailed analysis and commentary are provided below on four scenarios:

  • Professional Indemnity scenario 3.1: “AI produces erroneous advice/service to clients, causing a loss.”
  • Product Recall scenario 4.13: “Product Recall due to property damage and/or bodily injury caused by defective products designed and/or manufactured by AI, e.g. food contamination.”
  • Accident and Health (A&H) scenario 1.1: “Self-driving car error causes injury to passenger: first-party A&H policy could respond in first instance.”
  • Cyber scenario 10.1: “System downtime caused by AI malfunction and resulting business interruption.”

Responses to each of the four scenarios have been collated into three areas of analysis:

  1. The use of AI by insureds and growth potential in the near future.
  2. Risk management.
  3. Potential impact on losses.

For each of the four scenarios, a rating for viability, magnitude and overall impact has been provided. Please see Section 3 above for further information on the methodology of these calculations, as well as commentary on the trends and data points of interest.

Treatment of AI risk in model wordings

In 2024, the LMA undertook a review of the potential AI coverage position in the LMA’s suite of model cyber clauses. A summary table is provided in Annex 2, updated to include model cyber wordings published up to November 2025.

In this review, AI is recognised as a sub-set of software, and therefore as falling within the definition of Computer System widely used in LMA model Cyber clauses:

Computer System means any computer, hardware, software, communications system, electronic device (including, but not limited to, smart phone, laptop, tablet, wearable device), server, cloud or microcontroller including any similar system or any configuration of the aforementioned and including any associated input, output, data storage device, networking equipment or back up facility, owned or operated by the Insured or any other party.

The potential AI coverage position, for each model clause, is indicated with a “low”, “medium” or “high” rating, based on an assessment of the clause:

  • Low: a full exclusion of cyber risks.
  • Medium: an exclusion of cyber risks with a write-back of cyber coverage, or an affirmation of cyber risks coverage with a limitation.
  • High: a full grant of cyber coverage with no cyber-specific restrictions, subject to the other terms and conditions of the policy.

Disclaimer: The summary table in Annex 2 is intended to be indicative of potential AI (cyber) risk; the actual risks present in any contract will depend on the wording(s) used and the circumstances. The LMA accepts no liability to any party in respect of the information provided herein.

Annex

Download Annexes 1 and 2 below.

Annex 1: Full list of AI loss scenarios included in 2025 survey

Annex 2: List of LMA cyber wordings categorised for AI exposure