AI: The future is here

10 July 2024

Cast your mind back to the scene in the 2019 legal thriller Dark Waters in which lawyer Robert Bilott takes delivery of several hundred boxes of papers disclosed by a chemical manufacturing corporation as part of a court-ordered discovery process.

"No one can go through all this... Not in a million years" exclaims a colleague, not counting on the tenacity of the plucky hero as he painstakingly works his way through thousands of pages, armed with nothing more powerful than sticky notes and a black marker.

The story is a good (and true) one, but how much easier would the real-life Mr Bilott's task have been if he had had the benefit of artificial intelligence (AI) to scan the documents for him and produce a list of any and all references to harmful chemicals?

The answer may be "be careful what you wish for".

What is AI?

AI tools are programmes which enable machines to simulate human intelligence and problem-solving capabilities. AI is becoming increasingly prevalent in the legal world, where it is capable of summarising documents, reviewing contracts for key clauses, researching case law and, in some instances, predicting the outcome of court cases with remarkable accuracy. There is no doubt that AI tools offer considerable benefits for lawyers: automating routine activities, reducing the scope for human error and freeing up staff for more personalised work.
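To make the document-review idea concrete, the sketch below shows the mechanical core of such a task in Python: a crude keyword scan across a folder of files. It is purely illustrative; the folder name and term list are assumptions for this example, and real legal AI tools use trained language models rather than simple string matching.

```python
# A purely illustrative sketch of the mechanical core of a document
# scan: walk a folder of text files and report every line mentioning a
# term of concern. Folder name and term list are assumed for the example.
from pathlib import Path

# Hypothetical terms drawn from the Dark Waters subject matter
CHEMICALS_OF_CONCERN = ["PFOA", "C8", "perfluorooctanoic acid"]

def scan_documents(folder: str) -> list[tuple[str, int, str]]:
    """Return (file name, line number, line text) for every mention found."""
    hits = []
    for doc in sorted(Path(folder).glob("*.txt")):
        for lineno, line in enumerate(doc.read_text(errors="ignore").splitlines(), start=1):
            if any(term.lower() in line.lower() for term in CHEMICALS_OF_CONCERN):
                hits.append((doc.name, lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for name, lineno, text in scan_documents("discovery_boxes"):
        print(f"{name}:{lineno}: {text}")
```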

There are, however, risks associated with reliance on AI which should not be disregarded.

User beware

Consider the now-famous incident of two US lawyers who were fined by a judge after unknowingly submitting citations that had been "hallucinated" by ChatGPT, a free-to-use generative AI model whose responses to human prompts can sound entirely convincing while being entirely untrue.
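One practical guard against that failure mode is to check every AI-suggested citation against a trusted source before relying on it. The sketch below illustrates the idea; the trusted index is a tiny stand-in for a real law-report database, and the flagged citation is deliberately fictitious.

```python
# Hypothetical sketch of a "verify before you rely" step: flag any
# AI-suggested citation that cannot be found in a trusted index.
TRUSTED_INDEX = {
    "Donoghue v Stevenson [1932] AC 562",
    "Caparo Industries plc v Dickman [1990] 2 AC 605",
}

def flag_unverified(citations: list[str]) -> list[str]:
    """Return every citation absent from the trusted index for manual review."""
    return [c for c in citations if c not in TRUSTED_INDEX]

suggested = [
    "Donoghue v Stevenson [1932] AC 562",
    "Smith v Imaginary Corp [2023] UKSC 999",  # plausible-sounding, but invented
]
print(flag_unverified(suggested))
# -> ['Smith v Imaginary Corp [2023] UKSC 999']
```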

It has been reported that AI systems often produce inaccurate results, because the tools rely on information drawn from particular time periods or data sets. It is entirely possible, for example, that an AI system could miss an up-to-date legal development such as a critical court judgment or an important piece of written commentary.

There are other, more sinister dangers. In the USA, where AI is commonly used to assess an individual's risk of reoffending, data analysis shows that the computer algorithms producing these predictions regularly turn up racial disparities, with black defendants much more likely than white defendants to be wrongly flagged as future reoffenders or violent criminals. These algorithms are built on human inputs, and therefore frequently reflect the very human biases they are intended to circumvent.
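The kind of disparity analysis behind those findings can be illustrated with a toy example: compare, for each group, how often people who did not in fact reoffend were nonetheless flagged as high risk. The records below are invented purely for illustration and carry no real-world statistics.

```python
# Toy illustration of a disparity check on a risk-scoring tool: compare
# false positive rates between groups, i.e. how often people who did NOT
# reoffend were nonetheless flagged as high risk. Data is invented.
from collections import defaultdict

# Each record: (group, flagged_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", False, False), ("B", False, False), ("B", True, True),
]

def false_positive_rates(rows):
    """False positive rate per group among people who did not reoffend."""
    flagged = defaultdict(int)
    negatives = defaultdict(int)
    for group, high_risk, reoffended in rows:
        if not reoffended:
            negatives[group] += 1
            if high_risk:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rates(records))  # {'A': 0.5, 'B': 0.0}
```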

There are also risks around sharing confidential or client-sensitive information with AI systems: information handed to a third-party provider, which may itself be unregulated, could end up being used however that provider sees fit. Client information should never be entered into open, publicly accessible AI tools, and if you subscribe to bespoke tools, make sure the contract is clear about how data is used and protected.
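As a minimal illustration of that precaution, a firm might strip obvious identifiers before any text leaves its systems. The patterns below are illustrative assumptions; names, addresses and case details will not match simple patterns, so real-world redaction needs far more than regular expressions.

```python
# Minimal sketch of stripping obvious identifiers before text is sent
# outside the firm. Patterns are illustrative assumptions only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),  # National Insurance number
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# 07700 900123 is from Ofcom's reserved fictional number range
print(redact("Contact Jane at jane.doe@example.com or 07700 900123."))
# -> Contact Jane at [EMAIL REDACTED] or [UK_PHONE REDACTED].
```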

Looking forward

Efforts are already underway to mitigate some of the risks. In January 2024, LexisNexis announced the launch of its own generative AI, offering conversational search, summarisation and drafting features. Because the responses generated by this AI are based exclusively on LexisNexis's own content, it is claimed the programme avoids the "hallucinations" produced by ChatGPT and other large language models trained on swathes of sometimes inaccurate data.

While there is still little in the way of formal regulation of AI at present, attempts are underway to codify AI ethics in a way that supports its use in the legal profession while avoiding practical and ethical mishaps. The UK and the EU are both developing regulatory frameworks for AI which include various principles relating to governance, fairness and data security. Additionally, the Alan Turing Institute has, in conjunction with the ICO, prepared regulatory guidance on the legal and ethical ramifications of AI: Explaining Decisions Made with AI. The guidance lays out four key principles: be transparent; be accountable; consider the context; and reflect on the impact of your AI system.

The guidance recognises that increased reliance on AI systems risks allowing those systems to become the trustees of human decision making. By following the guidance, there is hope that AI systems will instead be used for the benefit of humankind, without compromising the need for a human mind when it comes to creative, persuasive, nuanced and ethical decision making.

The Law Society of Scotland will soon be issuing a “Guide to Generative AI for the Profession” which should help practitioners make informed decisions about how to incorporate the use of generative AI tools into their legal practice safely and effectively. There is also the AI Code of Conduct for the insurance claims industry, a voluntary commitment for the development, implementation and use of artificial intelligence in claims, of which DWF is a signatory.

Finally, when using AI it is necessary to work out which existing regulations apply to the context in which the AI is used and the purpose it serves. GDPR is, of course, the most obvious example, but other regulations may also be engaged.

Next steps for Scottish lawyers

As technologies mature and AI tools become more consistent in their performance of tasks, these systems will be called on to do more. Their impact on the legal sector is, however, still unknown and unpredictable.

Law firms should address AI technology from a risk management perspective in order to mitigate the potential for reputational damage and professional negligence claims. Use of AI systems should be addressed within your firm's terms and conditions, and you should be clear about the limitations of any technology used.

Keep in mind that the Master Policy will not extend to cover any third-party supplier of AI systems or services, and law firms will need to assess their liability and insurance cover in relation to the use of any tool. Appropriate due diligence should be undertaken on suppliers, particularly when it comes to new or untested offerings. If you have any concerns regarding the insurance position, you should contact the Master Policy brokers, Lockton.

And finally, there are practical steps lawyers should take to prevent mishap. For a start, never rely on anything you learn from an AI unless you have satisfied yourself as to the legitimacy of its sources. Next, be candid about your use of AI (we promise we wrote this article ourselves!), and be clear with clients in your letter of engagement about when and how you will use AI in respect of their data. Above all, remember that no AI will ever replace the genuine human connection between lawyer and client. There will only ever be so much AI can do.

Written by Lindsay Ogunyemi and Harriet Tooley
