

Reasoning, tool calling, and agentic systems in legal LLMs

7 minute read

Domain-specific AI provides accuracy and reliable legal reasoning


Transforming your law firm’s workflow can seem like both a promise and a risk. If your workflow works, why change it? Many lawyers are now seeing that generative artificial intelligence (GenAI) has the power to make legal work more efficient and accurate. Recent research shows a large majority of legal professionals (89%) see use cases for GenAI within their work. As GenAI tools become more sophisticated, firms that leverage them effectively will gain a competitive edge in delivering legal services.  

GenAI starts with a large language model (LLM). At a panel hosted at the British Legal Tech Forum 2025 called “Legal Language Models and Future AI in the Legal Profession,” experts gathered to share insights about the power of LLMs and how forward-thinking lawyers can use these tools to improve their practice.

Alexander Kardos-Nyheim, Senior Director, Thomson Reuters Labs
Dr. Andrew Fletcher, Director, AI Strategy & Partnerships, Thomson Reuters Labs
Felix Steffek, Principal Legal AI Advisor, Thomson Reuters; Professor of Law, University of Cambridge


Legal-specific LLMs matter

An LLM is a type of AI built for processing natural language tasks, including text generation, translation, and summarization. These models undergo extensive training on vast text datasets using machine learning techniques, allowing them to comprehend and generate human-like responses.  

Legal large language models are trained on domain-specific datasets, which gives them significant advantages over general-purpose models in accuracy, trustworthiness, and nuanced reasoning.

“There’s a lot of data and lots of understanding of legal processes and language that are not incorporated into those [general-purpose] models,” explained Andrew Fletcher, Senior Director, AI Strategy & Partnerships at Thomson Reuters. 

A model trained only on general public data often lacks the information needed to determine what is correct in a legal context. The content it generates can sound coherent yet be factually incorrect.

Conversely, a model trained on specialized legal data, such as the content created and curated by Thomson Reuters’ team of bar-admitted attorney editors, is a more reliable tool. A legal-specific LLM can also perform the more nuanced interpretation that legal work requires.

“Legal reasoning, as we know, is not just following a set of logical steps in a chain, it is about bringing context and understanding of the area of law,” said Felix Steffek, Professor of Law at the University of Cambridge and Principal Legal AI Advisor to Thomson Reuters. An LLM that has been trained on legal data can perform well where context and the broader implications of legal decisions are essential.

CoCounsel

Bringing together generative AI, trusted content and expert insights

Meet your legal AI assistant ↗

Integrating advanced features

Agentic AI. Tool calling. While you may not be familiar with these terms, they’re already being integrated into AI systems that can change how your firm provides legal services. These capabilities allow LLMs to go beyond simple text generation, enabling them to analyze information, draw conclusions, and make informed decisions — important qualities for legal applications. 

Tool calling enables LLMs to access external information sources. As Fletcher explained, “A language model that generates text is one thing, but we’re seeing ways in which large language models can go out to get additional pieces of information or to use additional tools to do things that the language model itself is not capable of or not well placed to do.” This functionality improves accuracy and helps legal professionals retrieve comprehensive insights.  
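To make the idea concrete, here is a minimal, hypothetical sketch of tool calling in Python. The fake_model stub, the search_case_law tool, and the dispatch logic are illustrative stand-ins rather than any real product’s API: the model returns a structured request naming a tool, the surrounding code runs that tool, and the result is fed back so the final answer is grounded in it.

```python
import json

def search_case_law(query: str) -> str:
    """Hypothetical external tool the model can ask for."""
    return f"3 cases found for '{query}' (stubbed result)"

TOOLS = {"search_case_law": search_case_law}

def fake_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a structured tool request."""
    return json.dumps({
        "tool": "search_case_law",
        "arguments": {"query": "limitation periods for breach of contract"},
    })

def answer_with_tools(prompt: str) -> str:
    request = json.loads(fake_model(prompt))
    tool = TOOLS[request["tool"]]           # look up the tool the model asked for
    result = tool(**request["arguments"])   # run it outside the model
    # In a real system this result would be passed back to the model,
    # which then writes the final, grounded answer.
    return f"Answer grounded in tool output: {result}"

print(answer_with_tools("What is the limitation period for breach of contract?"))
```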

In addition, the concept of agentic systems pushes AI capabilities even further, allowing LLMs to operate autonomously for extended periods. Fletcher described this evolution, stating, “Agentic AI is fundamentally a large language model using tools but, in a loop, so it continues to operate autonomously for longer periods of time on a task and to decide when it’s done.” These autonomous decision-making processes are expected to significantly augment legal workflows, making AI-powered legal tools more efficient and adaptable.

Agentic AI is fundamentally a large language model using tools but, in a loop, so it continues to operate autonomously for longer periods of time on a task and to decide when it’s done.

Andrew Fletcher

Senior Director, AI Strategy & Partnerships at Thomson Reuters
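As a rough sketch of that loop, assuming a hypothetical planner and hypothetical tools rather than CoCounsel’s actual architecture, the pattern looks something like this: the model repeatedly chooses a tool, each result updates the working state, and the loop ends when the model signals it is done or a step budget runs out.

```python
from typing import Callable, Optional

def lookup_precedent(state: str) -> str:
    """Hypothetical tool: enrich the working state with precedent research."""
    return state + " | precedents reviewed"

def summarise_findings(state: str) -> str:
    """Hypothetical tool: summarise what has been gathered so far."""
    return state + " | findings summarised"

STEPS = [lookup_precedent, summarise_findings]

def plan_next_step(state: str, step_no: int) -> Optional[Callable[[str], str]]:
    """Stand-in for the LLM planner: choose a tool, or None when the task is done."""
    return STEPS[step_no] if step_no < len(STEPS) else None

def run_agent(task: str, max_steps: int = 5) -> str:
    state = task
    for step_no in range(max_steps):           # hard cap keeps the loop bounded
        tool = plan_next_step(state, step_no)  # the model decides what to do next...
        if tool is None:                       # ...or decides that it is finished
            break
        state = tool(state)                    # apply the chosen tool and update the state
    return state

print(run_agent("Review NDA for unusual indemnity clauses"))
```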

Predicting court outcomes and better contract drafting

Clients want to know their risks, but predicting what a court will decide can be challenging for legal professionals. One of the most transformative applications of legal AI is predicting court outcomes.  

“Knowing what the court will decide means that we will know what the legal consequences are of the actions that we are taking,” said Steffek. 

Legal AI’s predictive capabilities allow lawyers and their clients to anticipate legal risks and make informed decisions. By drawing on vast datasets of past decisions, AI can forecast court outcomes with a high degree of accuracy. No prediction is ever certain, but giving clients an educated risk assessment can prove invaluable.
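At its simplest, this kind of prediction is a classification problem. The sketch below is a toy illustration only: the case features and outcomes are invented, and production systems are far more sophisticated, but it shows the basic shape of estimating an outcome probability from historical cases.

```python
from sklearn.linear_model import LogisticRegression

# Invented features per case: [log of claim value, prior rulings for claimant, written contract?]
X = [[5.0, 2, 1], [3.2, 0, 0], [4.1, 1, 1], [2.8, 0, 1], [5.5, 3, 1], [3.0, 1, 0]]
y = [1, 0, 1, 0, 1, 0]  # synthetic labels: 1 = claim succeeded, 0 = claim dismissed

model = LogisticRegression().fit(X, y)

new_case = [[4.5, 2, 1]]
probability = model.predict_proba(new_case)[0][1]
print(f"Estimated chance the claim succeeds: {probability:.0%}")
```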

Contract drafting also holds immense potential: it is repetitive and time-consuming, and contracts often contain gaps that can lead to legal complications. By integrating non-legal information, such as corporate structures and business decisions, AI can craft more reliable contracts that proactively address potential scenarios. Steffek emphasized the importance of this holistic approach, underscoring AI’s ability to improve access to justice and streamline legal processes.

Trusting technology

Is using AI all about trust? Not exactly. Confidence in an AI tool comes from trusting its source.  

Thomson Reuters’ CoCounsel employs many of the same fundamental AI components that other developers use, but its strength lies in industry-specific expertise.

As Fletcher explained, “We are using those same foundational building blocks, but we’re able to leverage the scientists and the engineers with lots of experience in building AI capabilities to help understand and inform the decision making that our customers need from agentic systems.” 

Additionally, our collaborations with leading AI vendors enhance our ability to refine and deploy cutting-edge technology. “By working with organizations like Google, OpenAI, and Anthropic, we get access early and test as these [technologies] are being developed to both shape them but also to enable us to rapidly interpret and make them work for the situations our customers care about,” Fletcher explained.

Through these partnerships, we ensure that our AI systems remain both technologically innovative and highly practical for real-world applications. 

Over 17,000 U.S. law firms and legal departments are already using CoCounsel, our industry-leading AI legal assistant, as is the entire U.S. federal court system. Schedule a free consultation or demo to make your workflow more efficient today. 

E-book

Agentic AI 101: What your business needs to know

Access e-book ↗
