The launch of ChatGPT at the end of 2022 kick-started a series of generative AI-based platforms that promised to solve a constellation of productivity problems across industries. The legal industry in particular stands to make huge strides by adopting generative AI-powered workflows. Large Language Models (LLMs), a class of generative AI, are good at generating human-like communication when given an input in natural language (like English).
However, before adoption, it is important to understand the problems that LLMs can solve.
Imagine you have two problems. Problem A requires you to compute 12345 times 678. Problem B requires you to determine if a cottage in Scotland is comfortable or not. You have access to a calculator and (please suspend disbelief for this exercise) no internet. You can, however, communicate with residents of the said cottage (see note on suspension of disbelief). You can ask them questions via an old-fashioned landline phone and they will respond. You do not know much about their background. It is December.
It is easy, of course, to answer the multiplication question using the calculator. You can also verify the result with a back-of-the-envelope calculation done manually. It is trickier to answer the second question. However, you have some background knowledge (the cottage is in Scotland, it is December) and you can quiz the residents over the phone. Using a series of well-thought-out questions (is the cottage heated, is the heating working, how old is the cottage, does it need major repairs) you can hopefully reach a conclusion on whether the cottage is comfortable or not.
Lawyers deal with a wide variety of client problems every day. Some problems are like Problem A, where a well-defined question leads to a single precise answer. Far more problems are like Problem B, where you have some background knowledge, some incomplete information and some knowledge-gathering tools, and you reach a robust solution only after a series of steps involving research, drafting, re-drafting, analysis and proofreading. Large language models can be useful for solving problems like Problem B.
An LLM is a type of Artificial Intelligence algorithm trained on large datasets, with a large number of parameters, to learn and predict human language. The quantity, quality and type of the training data are important, as they allow the model to learn about the world.
For example, if the dataset contains several books and encyclopedias on British history, the model will be able to provide useful responses when quizzed on British history but will stumble when asked about American history. Importantly, unlike a conscientious trainee, the model may not admit that it cannot answer questions on American history because it has not been trained on the subject; instead, it may produce a confident but incorrect response.
A growing number of LLMs are trained on legal data and perform better on legal work. When used responsibly, these LLMs possess tremendous potential to transform legal work. However, they can also output incorrect and biased information. Human-in-the-loop workflows, where a legal professional assesses the LLM response for correctness and irregularities before the response is used, are key to ensuring that we harness this powerful technology with care.
Against a backdrop of increased risk and tangible time savings, legal professionals can prepare to use generative AI effectively and responsibly to deliver cost-effective results at speed.
Broadly speaking, there are four use cases where LLMs can assist.
Re-drafting: It will perhaps not come as a surprise to legal professionals that a paragraph or clause can contain all the correct information and yet be difficult to read and understand. Perhaps the cause is poor grammar, meandering sentences or limited vocabulary. LLMs can help by re-drafting a piece of text to make it more readable. By giving an LLM a fixed piece of text to re-draft, the user is also using prompt engineering (the process of crafting prompts to get desired results when using LLMs) to force the model to focus on generating a response within narrow bounds.
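As a minimal sketch of this kind of prompt engineering, the re-drafting instruction can be expressed as a prompt template that explicitly bounds the model to the supplied text. The `build_redraft_prompt` helper and the template wording below are illustrative assumptions, not the interface of any particular product:

```python
def build_redraft_prompt(original_text: str) -> str:
    """Build a constrained re-drafting prompt (illustrative template only,
    not a specific vendor's API). The instructions bound the model to the
    supplied text so it rewrites for readability rather than inventing
    new substantive content."""
    return (
        "You are a legal drafting assistant. Re-draft the text between the "
        "markers so it is clear and grammatical. Do not add, remove or "
        "alter any substantive term, party name, date or figure.\n"
        "--- BEGIN TEXT ---\n"
        f"{original_text}\n"
        "--- END TEXT ---"
    )

clause = ("The Tenant shall, notwithstanding anything herein contained to "
          "the contrary, at all times keep the premises in good repair.")
print(build_redraft_prompt(clause))
```

The fixed markers and the "do not alter" instruction are the narrow bounds referred to above: the model is steered towards rewriting form, not content, and the output remains easy for a human reviewer to compare against the original.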
Open-ended questions: Legal teams can use LLMs to generate a first set of results by posing open-ended questions. For example, given a document or set of documents as an input, LLMs can produce a list of questions for cross-examination or highlight key provisions in a legal document. Again, the input documents act as the guardrails within which the model must operate (with varying success) to generate responses. The result will not be perfect but can save a lot of time.
Summarising documents: Legal teams can use LLMs to generate quick summaries of legal or financial documents by extracting key points from the document. Users can use citations from within the document to verify the accuracy of the information.
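That verification step can itself be partly mechanised. As a minimal sketch (the `verify_citations` helper and the quote-list convention are assumptions for illustration, not a feature of any particular tool), one can check that every passage a summary quotes actually appears verbatim in the source document:

```python
def verify_citations(summary_quotes: list[str], source_document: str) -> list[str]:
    """Return the quotes that do NOT appear verbatim in the source document.
    A human reviewer should treat any returned quote as a possible
    fabrication and check it manually. (Illustrative sketch only: real
    documents would need richer normalisation than whitespace collapsing.)"""
    normalised_source = " ".join(source_document.split())
    return [
        quote for quote in summary_quotes
        if " ".join(quote.split()) not in normalised_source
    ]

document = "The lease commences on 1 March 2024 and runs for five years."
quotes = ["commences on 1 March 2024", "runs for ten years"]
print(verify_citations(quotes, document))  # only the second quote is flagged
```

A check like this does not replace the human in the loop; it simply narrows the reviewer's attention to the quotes that cannot be traced back to the source.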
Generating first drafts of a legal work product: LLMs specifically trained on legal data can generate a first draft of a legal memo, client email or a table comparing provisions in two or more competing agreements. The draft will not be perfect and may contain inconsistencies. However, legal professionals can still save copious amounts of time by using the LLM to produce an initial draft that they can then crosscheck and refine.
As the market matures, we will see varied, more nuanced applications of LLMs for legal professionals. LLMs cannot take all of the workload, or any of the responsibility, away from a legal team. However, through a careful combination of prompt engineering, domain expertise and experimentation, LLMs can deliver significant time savings when used in the right settings.