
Generative AI and confidentiality: Preserving legal professional privilege in a rapidly evolving landscape

19 March 2026

Generative AI tools are now firmly embedded in everyday business life. From summarising documents to analysing issues and capturing meeting notes, these technologies offer clear efficiency gains. They are also increasingly used by organisations and individuals when grappling with legal issues.

However, the use of generative AI raises a number of important and sometimes under‑appreciated questions, one of which is: what happens to legal professional privilege when confidential or privileged material is shared with AI tools? Whilst this is an area in which the law is likely to develop quickly, recent guidance and case law from the United States and England underscore the need for caution.

Whilst this general conclusion may not come as a surprise to members of the legal community, it may be news to many operating in commercial roles who routinely deal with disputes, interact with lawyers, and handle privileged and otherwise sensitive information.

This article provides an update on the current position and sets out practical steps organisations can take to manage risk.

Privilege remains anchored in confidentiality

Under English law, legal professional privilege is a cornerstone of the justice system. It protects confidential communications where certain conditions are met, most commonly through:

  • Legal advice privilege, which covers confidential communications between a client and its lawyers for the dominant purpose of seeking or giving legal advice; and
  • Litigation privilege, which can extend more broadly to confidential communications with third parties where litigation is in reasonable contemplation and the dominant purpose of the communication is the conduct of that litigation.

A critical and common requirement for both types of privilege is confidentiality. If confidentiality is lost, privilege is likely to fall away.

Why generative AI tools create new risks

Generative AI tools are not lawyers. Communications with them are therefore not, of themselves, confidential communications between lawyer and client for the purposes of legal advice privilege.

The more difficult issue arises where existing privileged material, such as legal advice, draft pleadings or litigation strategy, is uploaded to an AI tool. Many widely used public AI platforms operate under terms that allow providers to store, analyse, or reuse user inputs, including for model training. Even where that risk is theoretical rather than realised, it creates a real difficulty in demonstrating that confidentiality has been preserved.

UK v Secretary of State for the Home Department [2026] UKUT 81 (IAC)  

In a recent decision of the Upper Tribunal (Immigration and Asylum Chamber), the tribunal considered the implications of legal advisers using open‑source generative AI tools. In that context, the tribunal indicated that uploading confidential material into publicly available AI platforms may be treated as placing that information into the public domain, with the result that client confidentiality is lost and any associated claim to legal professional privilege may fail.

Beyond this decision, there remains limited reported English authority directly addressing the privilege implications of AI use. As a result, organisations and advisers must navigate this area against a background of legal uncertainty, informed primarily by existing principles of confidentiality rather than settled case law.

Recent US litigation: US v Heppner No 25 Cr 503 (SDNY)

In early 2026, a US federal court held that a defendant’s communications with a publicly available generative AI tool (Claude) were not protected by attorney‑client privilege or work product protection (similar but not identical to English litigation privilege). The court’s reasoning focused on two familiar concepts: the absence of a lawyer‑client relationship and the loss of confidentiality caused by sharing material with a third‑party AI provider.

That decision does not change English law, nor do US concepts of privilege map neatly onto those in England and Wales. In particular, the US “work product” doctrine operates differently from English litigation privilege. Under English law, litigation privilege may, in appropriate circumstances, protect documents created by a party even where no lawyer is involved, provided litigation is in reasonable contemplation and the dominant purpose of the document is the conduct of that litigation. However, the reasoning in US v Heppner is firmly rooted in principles of confidentiality that English courts also regard as fundamental. As a result, the case has been widely viewed as a warning: privilege may be at risk in circumstances where the use of generative AI is inconsistent with maintaining confidentiality.

English courts have not yet ruled directly on the privilege status of AI‑assisted outputs. That uncertainty itself creates risk, particularly in the context of disclosure arising in connection with legal disputes. 

Publicly available AI versus enterprise (i.e. private) AI

Not all AI tools present the same level of risk:

  • Publicly available tools are typically open platforms, offered on standard terms, with limited transparency around data retention, reuse and access. These tools pose the greatest privilege risk.
  • Enterprise or “closed” AI systems can, in principle, be deployed within secure, private environments, subject to contractual confidentiality protections, restrictions on training and data reuse, and defined retention and deletion policies.

Even with enterprise tools, maintenance of privilege may not always be automatic. First, legal advice privilege depends on a communication being between a client and its lawyers for the dominant purpose of seeking or giving legal advice. Where AI is used by non‑lawyers to analyse legal issues or generate “advice‑like” outputs, privilege may never arise, regardless of how private or secure the platform is.

Second, confidentiality remains critical. Whilst enterprise AI tools can be used within secure environments, privilege will depend on how data is actually handled in practice. Issues such as data retention, internal access controls, onward sharing and whether outputs are reused or repurposed can all affect whether confidentiality is maintained.

Finally, whether privilege applies is often highly fact‑sensitive. Even within a private AI environment, the wider circulation of AI‑generated outputs, particularly beyond the legal team or to insurers, advisers or other stakeholders, may undermine claims to privilege if not carefully controlled.

These are all matters that the courts will need to address in the future. Currently, using AI as a substitute for proper legal advice carries inherent risk and may fall outside the scope of legal professional privilege, or result in the loss of privilege.

Practical risk areas to watch

A number of high‑risk scenarios recur:

  • Copying legal advice or privileged correspondence into AI tools for summarisation or “sense‑checking”, particularly where the tool is publicly available or where outputs generated by an enterprise AI tool are subsequently shared beyond the core legal and client teams;
  • Using AI tools or notetakers to record or transcribe calls where legal issues are discussed, especially where recordings or transcripts are stored centrally, accessible beyond the immediate participants, or processed by third‑party providers without adequate protections;
  • Non‑lawyers using AI to analyse legal risk before involving legal teams; and
  • Sharing AI‑generated outputs widely within organisations or across insurers, advisers and other stakeholders without adequate controls.

Each of these scenarios may give rise to arguments that privilege has been waived, or that it never arose in the first place.

Practical steps for organisations

Whilst the law continues to develop, organisations can take sensible, proportionate steps now:

  1. Treat publicly available AI tools as not confidential and avoid inputting privileged or sensitive legal material into them altogether. 
  2. Educate your teams, particularly non‑legal teams and senior management, on privilege risks associated with AI use.
  3. Implement clear AI usage policies aligned with confidentiality, data protection and litigation risk.
  4. Conduct due diligence on any enterprise AI tools, focusing on data handling, retention, training and contractual protections.
  5. Involve lawyers early where legal issues arise and ensure AI is used to support, not replace, legal advice.

Looking ahead

Generative AI is not going away, and nor should it. Used appropriately, it can deliver significant benefits. But privilege remains fragile and courts are likely to scrutinise AI‑assisted workflows closely where confidentiality is in doubt.

A useful rule of thumb is this: treat publicly available AI tools as external recipients. If disclosing material to an external recipient would concern you, inputting that material into a public AI tool should concern you equally.

Beyond that, there are clearly many more questions around the status of information placed into AI tools that this article has not addressed, and which involve a number of other uncertain and untested legal issues. These include, for example, the risk of data leakage from public AI tools in response to certain prompts; the potential for secondary evidence of privileged material to arise even where the original document is never disclosed; and the possibility that a party’s use of AI tools itself becomes the subject of speculative or “fishing” disclosure applications.

We can expect that future disputes will involve many challenges to the privilege status of materials that have been run through AI models, and that questions around AI usage and AI policies will arise in the disclosure stages of litigation and arbitration.

DWF acts for clients in disputes where the use of generative AI gives rise to issues around privilege, confidentiality and disclosure. If you would like to discuss how these issues may arise in current or anticipated disputes, please contact our experts.

Thanks to Sarah Deloison for contributing to this article. 
