
Artificial intelligence (AI) has been increasingly adopted by professional services companies in the expectation that it will bring greater efficiency. That said, the use of this emerging technology is not without its risks, and errors and omissions (E&O) underwriters should, at this juncture, ensure that policy coverage reflects both the risks and the opportunities it presents.

What is GenAI?

Although the term AI is not novel (its use dates back to the 1950s), a specific subset of AI known as Generative AI (GenAI) has recently garnered a great deal of attention, and it is the focal point of this article.

While there is no globally recognised definition of GenAI, for the purposes of this article, we would describe it as:

A deep learning algorithm that analyses vast swathes of data in order to produce content (for example, text, images, audio and video) in response to user inputs.

GenAI has exploded in popularity, with examples such as ChatGPT, Google Gemini and DeepSeek. Its usage has expanded from domestic use to being incorporated into the workflows of many companies, including professional services providers.

Risks and rewards of GenAI: professional services companies

Some potential risks and rewards associated with the use of GenAI for professional services companies are outlined below.

Rewards

Efficiency: Automation of repetitive, conventionally time-intensive tasks (such as data entry) frees individuals to focus on higher-value work. However, this carries the added risk of job losses for those employed specifically to undertake these administrative tasks.

Cost reduction: Automating parts of work traditionally done by human employees can reduce the need for large teams on projects. This can lower overhead costs while still maintaining output levels, particularly in firms where labour costs are high.

Accuracy: Depending on the task, an AI model allows for analysis of a whole data set rather than a sample. This could reduce the risk of sampling bias and may reveal trends that would otherwise remain hidden under traditional methods of analysis. AI might also reduce the number of errors and omissions, as unsupervised junior staff may be less accurate than a professional relying on GenAI.

Risks: liability for errors as a result of the use of AI

When a professional relies on GenAI in the provision of their advice or service, they are still expected to adhere to the professional standards set out by their respective regulatory body. This means that the individual contracted to provide the service could ultimately be liable in the event that AI produces erroneous outputs which are later relied upon. Below are examples of different types of liability scenarios that could occur as a result of GenAI usage.

Hallucinations: GenAI can produce incorrect information, sometimes presenting it as fact without any caveat. This can lead to a professional services company relying on inaccurate outputs. Damien Charlotin, PhD, a research fellow at HEC Paris, has built a database tracking legal decisions in cases where GenAI produced hallucinated content.[1] Since 14 June 2023, 268 such cases have been recorded across multiple jurisdictions, and we expect this figure to increase as adoption of the technology continues. One of the latest UK cases involved a barrister who, relying on AI, misquoted legislation and referred to five cases that did not exist.[2] The underlying judicial review action was successful, but the court reduced costs by GBP 7,000 as a result of the legal team's reliance on the AI hallucinations. The matter was also referred to the regulators for their consideration, and the judge stated that citing cases hallucinated by AI without verifying them amounts to negligence. This exemplifies how reliance on AI by lawyers could lead to an E&O claim.

Data protection/confidentiality: The submission of private or confidential documents into a public AI model could potentially breach duties of confidentiality or have data protection ramifications. This could lead to regulatory fines or legal liability should the leak of said data be attributable to the professional service provider. In addition, there could be contractual or tortious liability if confidential information provided to a public AI model is later regurgitated.

Use of GenAI by professional services companies

We categorise GenAI usage by subclass below.

Considerations for E&O underwriters in the context of AI

GenAI’s adoption will continue to increase and with it will come further hallucinations and erroneous outputs, which, if relied upon, could lead to more claims (irrespective of whether damages in the traditional sense are suffered – see barrister case above). As such, it is worthwhile to consider how the E&O market needs to adapt in order to face this new age.

Wordings written on a liability basis do not distinguish between claims involving GenAI and those that do not. Underwriters should therefore consider whether they intend to cover losses resulting from reliance on inaccurate GenAI outputs by professional services firms. If such risks are to be excluded, careful attention should be paid to defining GenAI and to drafting an appropriately specific exclusion. For example, an underwriter may be prepared to cover human failure to detect the error, but not an issue with the insured's systems or software, which would generally fall to a cyber policy.

Whether GenAI is caught by a cyber exclusion depends on the wording of that exclusion. While broader cyber exclusions may encompass GenAI as part of the insured's computer systems or software, if usage of the computer system/software is incidental to the provision of the professional service, the broad exclusion may be ineffective.

There are also untested coverage issues regarding whether the use of GenAI would be considered a professional service or a product. For instance, if the insured has developed its own GenAI system, which then provides erroneous outputs that are relied on, is this more appropriately insured under a product liability policy? Additionally, if an AI tool has repeatedly caused the same error, deciding whether an aggregation or series clause should apply could prove difficult.

Many coverage matters will also rely on fair presentation of the risk and the questions underwriters asked about the use of GenAI within the respective firm. Below is a list of example questions that could be asked by underwriters to aid their assessment of a risk. Please note that the list is illustrative only, is not complete and, if used, will need to be adjusted to work for specific use cases: