
Artificial Intelligence

Generative AI in legal tech: Balancing innovation with human oversight

· 5 minute read


Generative AI is a powerful tool for legal professionals, but one that requires human oversight.

It’s nearly inescapable. The transformative impact of artificial intelligence (AI) is everywhere. From the curated programs Netflix suggests for you, to your smart refrigerator telling you when to buy more milk, to the online ads for things you didn’t even know you wanted, AI has become ubiquitous in our lives.  

The same applies to the legal profession. Artificial intelligence, particularly generative AI, is all the buzz in the blogs, conferences, meeting rooms, and water-cooler conversations of legal professionals. It comes filled with possibilities and reservations, and both hinge on human oversight.

Jump to:

Understanding AI

The good, the bad, and the technical

Human oversight

Will you make generative AI standard practice?

 

Understanding AI 

By its simplest definition, AI is technology that allows a computer to operate in a “human” way. It takes in data from its training and its surroundings and returns information based on what it learns or senses. Many terms contribute to the lexicon of AI, but generative AI creates the most noise. Generative AI is a type of machine learning that creates drafts of content such as documents, images, songs, and art in response to prompts from the user. These tools comb through large volumes of content to create “the best” response to the prompt. Rather than extracting data, as AI does in tools like contract management software, it generates new content, such as the draft of a brief. ChatGPT is one of the best-known examples of generative AI.

 

The good, the bad, and the technical 

A recent survey of law firm professionals found that a large majority of respondents (82%) believe generative AI can be readily applied to legal work. However, only a slight majority (51%) believe it should be applied to legal work. That gap suggests some hesitation about using generative AI in legal technology for the high-stakes work of lawyers. Here are the factors to consider:

Hallucinations

No, you’re not seeing things. But your chatbot might be generating them. Hallucinations occur when generative AI creates false information. In a 2023 example, a US personal injury attorney used a chatbot to help prepare a court filing that cited six made-up cases. When his opponent and the judge discovered the error, the mortified attorney apologized, saying he didn’t know a chatbot could fabricate information. Generative hallucinations can appear credible because AI is designed to produce coherent text.

Bias

Bias is a human trait, and since humans train the algorithms that AI uses, those biases can be passed along. Without proper protocols in training, including diversity on the training team, unconscious bias can be baked into machine learning models.

Transparency

You don’t have to know how to cook a fancy French recipe to appreciate its deliciousness. And you don’t have to know how AI works to benefit from it. But you should know why an algorithm arrived at the answer it provides. Transparency means the AI vendor you choose should be able to explain how its system works, what it is capable of, and where its data comes from.

 


 

More of the latest insights

Read more about evolving attitudes of generative AI in the legal industry.

Access the full report

 


 

Human oversight 

Human expertise is the silver bullet of artificial intelligence. Despite what science fiction tells us, machines are machines, and they don’t operate unless programmed and instructed by humans. Human expertise is critical on both sides of the AI equation: user and vendor. 

User expertise

A continuing concern for legal professionals is that AI will replace lawyers. Not so. In the definition above, “generative AI creates drafts of content,” the emphasis should be placed on drafts. Regardless of what AI generates, it needs human eyes on it to ensure that what has been created is accurate and pertinent to the prompt. In the same way an attorney would oversee the work of an entry-level employee, monitoring and managing AI is a good business practice. This oversight plays a crucial role in minimizing risks, maintaining control, and harnessing the benefits of AI in a responsible manner. 

Vendor expertise

It all comes down to trust. The personal injury lawyer trusted ChatGPT to create court documents for his case, and it failed him miserably. For an industry-specific AI platform, industry-specific expertise is important to establish trust. In the legal world, few organizations compare to the industry expertise of Thomson Reuters. With more than 500 bar-admitted attorney-editors, and a team of cutting-edge data scientists, Thomson Reuters brings human intelligence into its artificial intelligence solutions. When training generative AI legal technology, a specific data set beats a broad data set. You don’t want data from Wikipedia, Twitter, or even Google; you want accurate, targeted information that’s been curated by professionals. Thomson Reuters brings that, in addition to proprietary content from products like Westlaw Precision and Practical Law. 

 

Will you make generative AI standard practice?

The arrival of generative AI in legal technology is a given. When legal professionals will make it standard practice remains the question. The efficiencies it brings by creating drafts must be weighed against the time it takes to review and monitor the results. By replacing time spent on monotonous tasks with work that draws on legal expertise, organizations will see gains in both profits and employee satisfaction.

 


 

Charting the Course for the Future of Generative AI

See what Thomson Reuters President and CEO Steve Hasker has to say about the buzz around generative AI.

Read blog post

 


 
