

New webcast explores intersection of generative AI and the legal industry

· 5 minute read

There is significant interest in generative AI tools among consumers and broad interest from companies across industries, fueling their growth throughout 2023 and leading to mass-market adoption.

Jump to:

  What is generative AI?

  How does generative AI support legal work?

  Risks and how to overcome them

  Thomson Reuters’ plans for generative AI in the legal industry

Legal professionals are continually finding new ways to make artificial intelligence (AI) work for them. Generative AI – those large language models that can create new content based on a prompt from a user – has been in widespread use for nearly a year, since the release of ChatGPT.  

Legal professionals have taken a particular interest, according to Zach Warren, manager of Technology and Innovation at the Thomson Reuters Institute. On the recent webcast “Making sense of the intersection between AI and legal professionals,” he noted that a full 82 percent of legal professionals believe generative AI can be used for legal work. That’s a significant number for a profession that sometimes moves slowly with new technology.

Warren was joined on the webcast by Zena Applebaum, VP of Product Marketing for Thomson Reuters, and Andrew Fletcher, Director of AI Strategy & Partnerships for Thomson Reuters Labs. 


What is generative AI? 

Generative AI refers to a subset of AI that uses deep learning algorithms to generate new data, text, images, and even videos that mimic the characteristics of human-generated content. Applebaum noted that, “You can ask a question as though there were someone in the room with you, and the tech will generate new content – like a summary or bulleted-list overview of a document.” ChatGPT, Bard, and Copilot are examples of openly available tools on the market now.
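To make that prompt-and-response pattern concrete, here is a minimal sketch of a summarization request, assuming the openai Python package (v1 client); the model name and clause_text below are placeholders, not tools discussed on the webcast.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

clause_text = "..."  # placeholder: the document text to summarize

# One natural-language prompt in, newly generated content out.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Summarize the following clause as a bulleted list:\n\n" + clause_text,
    }],
)
print(response.choices[0].message.content)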

 

How does generative AI support legal work?

Fletcher noted the distinction between automation (taking the human out of the task) and augmentation (humans using a tool to complete a task).

“Automation is done with caution because when you’re dealing with wanting to focus on accuracy and outcomes being correct, you need to be pretty confident the automation is going to deliver what it is that you need,” he said. “It’s appropriate for low-value repetitive tasks.” 

Augmentation, on the other hand, puts tools in the hands of experts who make decisions based on what the tools tell them. He and Applebaum agreed that the tools on the market now can help augment legal work – you just don’t want to rely on their output without verifying it. 

“These tools don’t replace lawyers. They augment them and help them be more efficient. You can get to a first draft faster – but output from AI will not be your final document.” 

Warren concurred: “These are tools, not independent entities,” he said. 

 


For legal professionals just getting started with generative AI, Applebaum offered three main skills that will help them: curiosity, prompting skills, and the ability to validate the responses.
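As a hypothetical illustration of the prompting skill (not an example from the webcast), compare a vague prompt with one that gives the model a role, the source material, and a concrete output format:

clause_text = "..."  # placeholder: the clause being analyzed

vague_prompt = "Tell me about this contract."

# A stronger prompt states the role, the material to use, and the output format.
specific_prompt = (
    "You are a commercial litigation associate. Using only the clause below, "
    "summarize it in three bullet points, then flag any unusual terms.\n\n"
    + clause_text
)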

Watch the webcast to learn more about these skills.


 

Risks and how to overcome them 

Privacy and hallucinations are two of the main risks associated with the tools that are widely available now.  

Regarding privacy, it is sometimes the case that once you enter text – say, asking a tool to summarize a document – the tool’s provider retains that content. The model can learn from it or resurface it in other responses. That can cause real problems when you’re inputting your own work product or a client’s confidential information.

Hallucinations are instances where the tool returns a plausible but entirely fabricated response, including invented cases and facts.

To overcome both of these risks, choose tools that guarantee the confidentiality of your data and that show you where their answers come from. Fletcher noted that instead of the typical “black box” model of AI, Thomson Reuters uses retrieval-augmented generation (RAG), which shows users where answers come from so they can validate them.
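In outline, RAG retrieves relevant documents from a trusted corpus, grounds the prompt in them, and returns the sources alongside the answer so a human can check every claim. The sketch below shows only the general pattern; search_case_law and llm are hypothetical stand-ins, not Thomson Reuters APIs.

# Minimal retrieval-augmented generation (RAG) pattern.
# `search_case_law` and `llm` are hypothetical stand-ins, not real APIs.
def answer_with_sources(question, search_case_law, llm, k=3):
    # 1. Retrieve the k most relevant documents from a trusted corpus.
    docs = search_case_law(question, limit=k)

    # 2. Ground the prompt in those documents and ask for numbered citations.
    context = "\n\n".join(
        f"[{i}] {doc.title}: {doc.excerpt}" for i, doc in enumerate(docs, start=1)
    )
    prompt = (
        "Answer the question using only the numbered sources below, citing "
        f"them by number.\n\nSources:\n{context}\n\nQuestion: {question}"
    )
    answer = llm(prompt)

    # 3. Return the sources with the answer so a lawyer can verify each one.
    return answer, docs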

 

Thomson Reuters’ plans for generative AI in the legal industry

Applebaum is certain the legal industry will adopt generative AI tools. “No lawyer types documents on a typewriter, and most of us carry previously unimaginable computing power in our pockets,” she said. She also noted that workers from Gen Z will expect the tools they use in their personal lives to be available professionally, and that they believe AI will increase the value of their skills.

Generative AI will certainly help legal professionals work more efficiently and with greater confidence. What will it mean for job security? There’s good news, Fletcher said: “The people who embrace the change will get to focus on more interesting work.”


The webcast “Making sense of the intersection between AI and legal professionals” is available on-demand.

Watch it here.


 
