Artificial intelligence (AI) is being adopted by professional services companies in the expectation that it will increase efficiency. The use of this emerging technology is not without risk, however, and errors and omissions (E&O) underwriters should ensure that policy coverage reflects the associated risks and opportunities.
What is GenAI?
Although the term AI is not novel (its use dates back to the 1950s), attention has recently focused on a specific subset of AI known as Generative AI (GenAI), which is the focal point of this article.
While there is no globally recognised definition of GenAI, for the purposes of this article, we would describe it as:
A deep learning algorithm that analyses vast swathes of data in order to produce content (for example, text, images, audio and video) in response to user inputs.
GenAI has exploded in popularity, with examples such as ChatGPT, Google Gemini and DeepSeek. Its use has expanded from the domestic sphere into the workflows of many companies, including professional services providers.
Risks and rewards of GenAI: professional services companies
Some potential risks and rewards associated with the use of GenAI for professional services companies are outlined below.
Rewards
Efficiency: Automation of repetitive, conventionally time-intensive work (such as data entry) allows individuals to focus on higher-value tasks. However, this carries the added risk of job losses for those employed specifically to undertake such administrative work.
Cost reduction: Automating parts of work traditionally done by human employees can reduce the need for large teams on projects. This can lower overhead costs while still maintaining output levels, particularly in firms where labour costs are high.
Accuracy: Depending on the task, the use of an AI model allows for analysis of a whole data set rather than a sample. This could reduce the risk of sampling bias and may reveal trends that would otherwise remain hidden under traditional methods of analysis. AI might also reduce the number of errors and omissions, as unsupervised junior staff may be less accurate than a professional relying on GenAI.
Risks: liability for errors arising from the use of AI
When a professional relies on GenAI in the provision of their advice or service, they are still expected to adhere to the professional standards set out by their respective regulatory body. This means that the individual contracted to provide the service could ultimately be liable in the event that AI produces erroneous outputs which are later relied upon. Below are examples of different types of liability scenarios that could occur as a result of GenAI usage.
Hallucinations: GenAI can produce incorrect information, often presented confidently and without caveat, which a professional services company may then rely upon. Damien Charlotin, PhD, a research fellow at HEC Paris, has built a database tracking legal decisions in cases where GenAI produced hallucinated content.[1] Since 14 June 2023, 268 such cases have been recorded across multiple jurisdictions, and we expect this figure to increase as adoption of the technology continues. One of the latest UK cases involved a barrister who, in reliance on AI, misquoted legislation and referred to five cases that did not exist.[2] The underlying judicial review action was successful, but the court reduced the costs awarded by GBP 7,000 as a result of the legal team’s reliance on the AI hallucinations. The matter was also referred to the regulators for their consideration, and the judge stated that citing cases hallucinated by AI without verifying them amounts to negligence. This exemplifies how lawyers’ reliance on AI could lead to an E&O claim.
Data protection/confidentiality: The submission of private or confidential documents into a public AI model could breach duties of confidentiality or have data protection ramifications. This could lead to regulatory fines or legal liability should the leak of that data be attributable to the professional services provider. In addition, there could be contractual or tortious liability if confidential information provided to a public AI model is later reproduced in outputs to other users.
Use of GenAI by professional services companies
We categorise GenAI usage by subclass below.
Lawyers
At the tail end of 2023, over 60% of large law firms said they were exploring the potential of GenAI.[1] We would expect this number to have risen since then.
Presently, law firms are using GenAI for a range of tasks, including:
- Risk identification and prediction: for example, undertaking routine anti-money laundering checks.
- Administration: gathering and reviewing information from existing or potential clients. For example, using legal chatbots that can offer 24/7 responses to common legal questions and triage cases based on urgency for internal referral.
- Searches: automation of work, such as document discovery or identification of precedents for litigation.
- Bundling: production of court documentation.
- Text generation: contract drafting, summarisation work, or writing articles or client letters.
In 2025, the Solicitors Regulation Authority (SRA) approved the first AI-based law firm, Garfield.Law.[2] Garfield.Law acts as a legal assistant to help small to medium enterprises (SMEs) pursue unpaid debts. It cannot provide advice on the merits of a case; instead, it directs the user through the steps necessary to take a debt-recovery case to trial and can draft particulars of claim. Garfield.Law advises users to fact-check its outputs before issuing them to the debtor or court, and states that it has strong internal processes to check outputs before they are released to the client. Prior to approval, the SRA had to satisfy itself that the firm’s processes were aligned with its requirements, including a minimum level of insurance for the regulated firm. Its decision to approve the firm was predicated on holding named regulated solicitors within the firm accountable should any system outputs prove erroneous.
[1] SRA | Risk Outlook report: The use of artificial intelligence in the legal market | Solicitors Regulation Authority
[2] SRA | SRA approves first AI-driven law firm | Solicitors Regulation Authority
Accountants
Seventy percent of accountants in a survey undertaken by Chartered Accountants Worldwide said they utilise GenAI in their work on at least a monthly basis, and 83% of 18–24-year-old accountants surveyed use GenAI at least once a week.[1]
Accountants are utilising GenAI for tasks including:
- Accounting/bookkeeping automation: automatically categorising expenses, reconciling accounts and generating financial reports, potentially reducing the risk of manual errors.
- Tax return preparation: extraction and analysis of various financial documents.
- Document review: summarisation of contracts, invoices and receipts to identify anomalies that merit further investigation.
The ‘Big 4’ accountancy firms have also invested significantly in AI, developing their own platforms to integrate into their processes or to license on to their clients.
Architects
GenAI adoption by architects stands at around 41%, according to a survey undertaken by the Royal Institute of British Architects, with 49% of respondents stating that AI could help in the design of complex building projects.[1]
Below are some of the most common uses of GenAI by architects:
- Early design stage visualisations: for example, conceptual renderings created at the beginning of a project. GenAI is used because these renders can be generated rapidly, rather than designed manually.
- Generative design: GenAI has been used to evaluate thousands of permutations to suggest performance-optimised designs that might not be immediately obvious to a human designer.
- Parametric design: where parameters in a design need to be changed, GenAI can do so in real time to test how the changes affect the design as a whole.
- Model generation: automation of routine modelling tasks, such as converting sketches or floor plans into detailed 3D models to save time.
Looking ahead, 57% of those who took part in the survey believe that GenAI will improve efficiencies in the architectural design process in the next two years.
Considerations for E&O underwriters in the context of AI
GenAI’s adoption will continue to increase, and with it will come further hallucinations and erroneous outputs, which, if relied upon, could lead to more claims (irrespective of whether damages in the traditional sense are suffered – see the barrister case above). It is therefore worth considering how the E&O market will need to adapt.
Wordings written on a liability basis do not distinguish between claims involving GenAI and those that do not. Underwriters should therefore consider whether they intend to cover losses resulting from reliance on inaccurate GenAI outputs by professional services firms. If such risks are to be excluded, careful attention should be paid to defining GenAI and to drafting an appropriately specific exclusion. For example, an underwriter may be prepared to cover a human failure to detect the error, but not an issue with the insured’s systems or software, which would generally fall to a cyber policy.

Whether GenAI is caught by a cyber exclusion depends on the wording of the exclusion. While broader cyber exclusions may encompass GenAI as part of the insured’s computer systems or software, if usage of the system or software is incidental to the provision of the professional service, the broad exclusion may be ineffective.

There are also untested coverage issues regarding whether the use of GenAI would be considered a professional service or a product. For instance, if the insured has developed its own GenAI system, which then produces erroneous outputs that are relied on, is this more appropriately insured under a product liability policy? Additionally, if an AI tool has repeatedly caused an error, deciding whether an aggregation or series clause should apply could prove difficult.
Many coverage matters will also rely on fair presentation of the risk and the questions underwriters asked about the use of GenAI within the respective firm. Below is a list of example questions underwriters could ask to aid their assessment of a risk. Please note that the list is illustrative only, is not exhaustive and, if used, will need to be adjusted for specific use cases:
- What is the purpose of the AI being used in the company and has the company set out a clear set of circumstances where they consider AI to be of use and where it is not?
- How have the tools been selected and to what extent have the tool’s terms and conditions been reviewed for exposures such as ownership of the data?
- Has an acceptable use policy been implemented in the company, is it distributed to all employees and is it regularly reviewed/updated as new products come to market?
- What data is being entered into the models and how does the firm use proprietary or confidential information in the models?
- What training does the company provide to its staff on the grounding and framing of AI and what procedures are in place to assess the outputs for accuracy and acceptability?
- Is the use of any tool being disclosed correctly to third parties, where, for example, advice being given has been created in part or wholly with AI?
- Does the client have existing cyber coverage? If so, does the cyber policy contain an ‘other insurance’ clause?
- How is the firm billing clients for work involving AI and has it changed from ‘traditional’ advice methods?
- Has the client agreed or signed a contract confirming the use of AI and does this include limitations of liability?