
The future is now: Professional liability risks in the age of Artificial Intelligence

05 September 2025

AI is rapidly transforming professional services by streamlining routine tasks to boost efficiency. While legal liability for AI errors remains under review, professional firms must navigate an ever-evolving regulatory landscape through strong governance, close oversight and careful risk management.  

“Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained”. Dame Victoria Sharp, President of the King’s Bench Division, Ayinde v LB Haringey and Al Haroun v Qatar National Bank [2025] EWHC 1383 (Admin).

In common with many other businesses, professional service firms are increasingly adopting certain types of Artificial Intelligence to maximise productivity and achieve significant savings of time and cost on routine and other tasks formerly carried out by the professionals themselves. Examples include:

  • automated tools and techniques used by auditors;  
  • design or analysis tools within Building Information Modelling and geographic information systems used by consultants in the construction industry;  
  • valuers using automated valuation models;  
  • quantity surveyors using AI tools to take quantities from 2D and 3D models; and
  • contract and document reviews carried out by legal professionals.  

Interesting questions are starting to arise as to where legal liability would lie in the event of an error made by an AI system giving rise to a loss to a third party (the Law Commission has recently issued a discussion paper on the subject: Artificial Intelligence and the Law: a discussion paper – Law Commission). In the meantime, we start by outlining the regulatory background and the risks arising from the use of AI, the areas on which firms and their Insurers will likely be focussing, and the increased need for professionals to develop further their critical thinking skills and their exercise of professional scepticism.

The term Artificial Intelligence ("AI") was coined some 70 years ago, and for much of the period since it referred to narrow, rule-based AI systems in which explicit rules determined how an output was generated in response to an input to the system. It is only relatively recently (and particularly in the 21st century) that there has been a significant advancement in the development of "machine learning".
The complexity and interrelationship of the different AI systems is illustrated in the diagram below, taken from page 17 of the recently updated "AI Playbook for the UK Government".

 

[Diagram: the relationship between different AI systems, from the AI Playbook for the UK Government. © Crown copyright. This material is licensed under the Open Government Licence v3.0.]

Machine learning focuses on creating algorithms or systems that can learn from data and improve their performance without explicit programming. This includes Generative AI ("Gen AI"), which generates content based on the data on which it is trained, and Generative Pre-Trained Transformers ("GPTs"), which can generate text, sounds and video from a given cue or context.

Large language models ("LLMs") are Gen AI models trained on enormous quantities of text – more than a human could read in several lifetimes – which can then generate new text on request based on that training, e.g. ChatGPT, Bing Chat and Google Gemini (formerly Bard).

Limitations and potential issues 

In the professional services context, AI – and in particular Gen AI and GPTs – is increasingly being used as a tool by professionals as part of their business. Such use requires all users to have a keen appreciation of the limitations of any given AI system, an understanding of the material on which it has been trained and the source of that material, and an awareness of the potential issues which may arise from its use.

These include the fact that there will very often be a long supply chain for the different components and packaging of Gen AI, making it difficult for the user business to check who developed what as part of any verification process. Added to that is the 'black box' nature, or opacity, of AI systems, which makes it difficult for the user to understand precisely how the technology works and so to carry out any meaningful checks.
 
Further, the AI may have been trained on data which is limited and/or biased, and any output will reflect this. In particular, the output from public AI chatbots is simply what the model predicts to be the most likely combination of words (based on the documents and data it holds as source information); it is not necessarily the most accurate, or indeed correct, answer.
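
The point can be made concrete with a deliberately simplified sketch (ours, not any vendor's implementation): a toy model that continues a prompt by sampling the statistically likely next word from its training text. Nothing in the procedure checks whether the result is true.

    import random
    from collections import defaultdict

    # Toy training corpus and bigram table: for each word, the words seen after it.
    training_text = "the court held that the claim failed the court held that the appeal failed"
    follows = defaultdict(list)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    def generate(prompt, length=6):
        """Extend the prompt by repeatedly sampling a likely next word."""
        out = prompt.split()
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            out.append(random.choice(candidates))  # chosen for likelihood, not truth
        return " ".join(out)

    print(generate("the court"))  # e.g. "the court held that the claim failed"

Real LLMs operate on vastly larger corpora and far more sophisticated statistical machinery, but the underlying point stands: likelihood, not accuracy, drives the output.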
 
As the content generated by AI chatbots becomes increasingly similar to actual human output, there is a growing risk of anthropomorphism – users treating such systems as effectively human because interactions with them take place in natural language. This can also lead to such systems acquiring a level of authority that they do not merit.

It is also well known that Gen AI is both adaptive and, to some extent, autonomous. It may "hallucinate", producing a result which (without scrutiny) appears highly credible – as was the case with the submissions to the court considered in the recent case of Ayinde v London Borough of Haringey & Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin) – or "reward hack", i.e. optimise an outcome, but not in a manner foreseen by the developer or user.
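
"Reward hacking" is easiest to see in a toy example (ours, purely illustrative): when the measure used to score an output is only a proxy for what is actually wanted, an optimiser will happily maximise the proxy rather than the intent behind it.

    # The designer wants thorough answers; the proxy actually scored is length.
    def proxy_score(answer):
        return len(answer)

    candidates = [
        "The claim is statute-barred under the Limitation Act 1980.",
        "word " * 500,  # padding that games the length-based proxy
    ]

    # The optimiser maximises the proxy and so picks the degenerate answer.
    best = max(candidates, key=proxy_score)
    print(proxy_score(best), repr(best[:30]))

The analogy for professional use is direct: an AI system asked to "optimise" an outcome may satisfy the letter of its instruction in a way that no developer or user foresaw.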

Whilst this can cause issues for the professional utilising the AI system, it can also cause issues when AI systems are used by Claimants to advance their claims against professionals (we are already seeing the use of AI by Litigants in Person to draft Pre-Action Protocol correspondence). There are likely to be significant increases in costs if professionals (and their insurers) and defence counsel are forced to check lengthy AI-generated correspondence when defending claims.

AI systems continually learn from the material provided to them: any material uploaded to an AI system can be used by that system when generating outputs. The importance of keeping confidential data within the user organisation, and not uploading it into an open AI system, therefore needs to be regularly emphasised. Professionals will also wish to consider the extent to which their use of AI is explained to their clients, as well as the use of AI by those in their supply chains.
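
As a practical illustration (a minimal sketch only – the patterns and the notion of a "send" step are our assumptions, not a real product or API), a firm might screen any prompt for markers of client-confidential material before it can reach an external, open AI service:

    import re

    # Illustrative patterns only – a real deployment would use the firm's own
    # client identifiers, matter numbers and classification labels.
    BLOCKED_PATTERNS = [
        re.compile(r"\b[A-Z]{2}\d{6}\b"),            # e.g. an internal client reference format
        re.compile(r"\bprivileged\b", re.IGNORECASE),
        re.compile(r"\bconfidential\b", re.IGNORECASE),
    ]

    def safe_to_submit(text):
        """Return False if the text appears to contain client-confidential material."""
        return not any(p.search(text) for p in BLOCKED_PATTERNS)

    prompt = "Summarise the attached CONFIDENTIAL witness statement for client AB123456."
    if safe_to_submit(prompt):
        pass  # only here would the prompt be released to the external service
    else:
        print("Blocked: prompt appears to contain confidential client material.")

A screen of this kind is no substitute for the clear and repeated guidance discussed below; it simply adds a technical backstop.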

UK Government Regulation – Five principles  

The UK Government, in its March 2023 White Paper "A Pro-Innovation Approach to AI Regulation", adopted an approach involving five principles which it expects to be applied by regulators. This is to be contrasted with the European Union, which has adopted a risk-based approach via legislation (the EU AI Act), classifying AI systems by risk level (Prohibited, High Risk and Low Risk).

The five principles proposed for use by sector regulators in the UK are:
  1. Safety, security and robustness – systems must be able to withstand hacking and AI going "off the rails". 
  2. Transparency and explainability – there should be no black boxes, systems must be clear about how they work and the data used.  
  3. Fairness – there should be no biases, and decisions need to be transparent, providing fair and just outcomes. 
  4. Accountability & governance – oversight, controls and governance structures are required. 
  5. Contestability and redress – users need to be able to challenge output from AI and seek redress where necessary. 

We may yet see statutory regulation introduced: the Artificial Intelligence (Regulation) Bill was reintroduced into the House of Lords in March 2025 as a Private Member's Bill (where it currently awaits a second reading). It proposes:

  • A statutory AI Authority; 
  • Mandatory AI officers; and 
  • AI audit and transparency obligations.

UK regulators  

The extent of guidance and any prescriptive measures relating to AI from Regulators of professional services firms varies across sectors and is constantly developing. 

  • Accountants and Auditors: On 26 June 2025, the Financial Reporting Council issued guidance on the use of artificial intelligence in audit (AI in Audit), alongside the results of its thematic review into the automated tools and techniques ("ATTs") used in audits (Audit Thematic Reviews).

On 1 July 2025, the Institute of Chartered Accountants in England and Wales's revisions to its Code of Ethics came into effect, including a specific provision about threats from the use of Technology (200.6.A2): ICAEW Code of Ethics 2025.

  • Architects: The Royal Institute of British Architects has issued a report into the increased use of AI by architects and its implications (RIBA AI report 2025).

Interestingly, recently updated guidance issued to the Judiciary specifically sets out that Judicial Office holders should not use LLMs for legal research or legal analysis, as they are not suitable tools for those purposes. Judicial Office holders do, however, now have access to a private Microsoft Copilot system within a secure environment which they are able to use (Refreshed AI Guidance published version).

The Judicial Guidance anticipates that the Courts and Tribunals will increasingly have to deal with lengthy AI-generated submissions, particularly from Litigants in Person, as noted above. Court rules may well develop to require parties to notify the court if they have used generative AI in preparing submissions. The Court of King's Bench in the Canadian province of Manitoba has already implemented such a rule – will the courts of England and Wales be far behind?

Points for firms and their Insurers to consider relating to the use of AI  

The use of AI by professional service firms will be subject to the same assessments, checks and supervision as other methods of service delivery.  

Key points for consideration when using AI include ensuring that businesses can answer and document the following questions:

  1. What are the governance structures in the business? Who is accountable for the AI systems, and what controls and oversight are there in relation to AI?
  2. What steps were taken by the business, by way of due diligence, in relation to the developer and supplier of the AI system?
  3. What type of AI system is it, and what are its limitations?
  4. What tests and audits are carried out on the AI system, including of its ability to withstand hacking?
  5. Are there feedback sessions for users to ensure continuous improvement?
  6. Is clear guidance given to all users on the purpose of the AI and its supervision?
  7. Is ongoing, compulsory training and education given to all users on the limitations and potential pitfalls of the AI system, and on the need for professional scepticism and critical thinking in relation to all of its outputs?
  8. Is clear guidance and training given on ethical issues such as privacy, fairness, transparency, confidentiality and accountability?
  9. Is clear and repeated guidance given that it is never acceptable to enter any client or third party information into an open AI source, and what controls are in place to prevent this?
  10. On what terms is the use of AI communicated by the business to clients?

Professionals are generally liable, under the law of negligence, if they do not exercise the expected reasonable skill, care and diligence when performing their professional duties. Equally, professionals are expected to adopt widely recognised practices and procedures, and to keep their knowledge up to date.  

Whilst the use of AI by professionals is increasing, we are not yet at a stage where a professional may be considered negligent were he or she not to use AI. By contrast, the growing phenomenon of 'AI washing' – where companies exaggerate or misrepresent their use of AI or the capabilities of their systems – is emerging as a significant risk for directors and officers (D&Os).

In the UK, the Advertising Standards Authority has issued guidance warning companies against misleading claims about AI, emphasising the need for responsible advertising. Meanwhile, in the US, scrutiny of AI washing has intensified, with the Securities and Exchange Commission bringing a series of enforcement actions last year against companies and their D&Os for allegedly making false statements about AI-assisted technologies, such as speech recognition systems (AI washing: Understanding the risks | DWF Group).

As systems develop, though, and their outputs become more reliable, there will come a time in every professional field where the widely recognised practices and procedures which professionals are expected to adopt will include the use of AI. Given the rate of development of AI and of technology generally, this may not be too far away.

Comment 

Whilst AI systems are an attractive tool to enhance efficiency, productivity and cost savings for professional service firms, as set out above their use must always be subject to clear policies, supervision and education about their limitations and potential risks. At the present time, it remains essential that every AI-generated output is checked by a human. Above all, there is no substitute for the user applying critical thinking and professional scepticism to test the output.

As Isaac Asimov wrote in I, Robot, "it is the obvious which is so difficult to see most of the time". With AI (as with all things in the current age) there is an increased need for users to scrutinise outputs for obvious errors or fabrications, and for professional services firms to re-emphasise the importance of professional scepticism.

The integration of AI into professional services is not just a technological shift but a cultural and operational transformation. As AI systems become more sophisticated, they are increasingly capable of performing tasks that were once the exclusive domain of highly trained professionals. This evolution raises fundamental questions about the future of work, the role of human expertise, and the ethical boundaries of automation. 

As AI systems continue to develop, will clients remain willing to pay for the cost of the professional cross-checking the output of an otherwise competent AI system? At what point can an AI system be trusted? These are issues for the Regulators, as we have noted above. 

Looking further into the future, the Law Commission discussion paper mentioned above has started a public debate around the question of who will be liable for harm caused by AI systems as those systems continue to develop. Will there come a point when AI systems are so autonomous, and their reasoning so opaque, that they are allocated a separate legal personality and held responsible for their actions? If so, might we then see the emergence of an insurance market specifically for professional AI risks?

Further Reading: 

AI washing: Understanding the risks | DWF Group

Professional Ethics: Decline or Greater Scrutiny
