Agentic AI: From statistical patterns to strategic partners

Frank Schilder, Thomson Reuters Labs

· 10 minute read


Why agentic AI represents a fundamental shift for legal professionals

Highlights

  • Agentic AI systems, with their ability to perform intricate reasoning and manage workflows, present significant risks, including the potential to fabricate data. Mitigating these risks requires robust safeguards such as human oversight, verification protocols, and transparent audit trails to deliver the accuracy that legal ethics demand.
  • Unlike traditional AI's static pattern matching, agentic AI dynamically uses multiple tools and real-time data to augment professional workflows. It simplifies complex research and analysis to free legal experts for higher-level strategic thinking, client counseling, and novel case strategies.
  • Successful implementation requires built-in accountability through transparent logs and tracing mechanisms. The aim is not to replace lawyers but to empower them: by elevating practice standards, agentic AI frees legal professionals to focus on uniquely human skills like judgment and complex problem-solving.

The legal profession stands at a technological inflection point. While artificial intelligence has promised innovation for years, agentic AI systems represent something fundamentally different — and legal leaders need to understand both the extraordinary potential and the critical limitations before making strategic investments.

Unlike traditional AI tools that simply process information, agentic AI systems can reason, plan, and execute complex workflows with minimal initial direction. For legal professionals managing vast document volumes, conducting intricate research, and navigating complex cases, this technology offers unprecedented capabilities. But as our conversation with agentic AI expert Frank Schilder, senior director of applied research in Thomson Reuters Labs, reveals, the path forward requires careful consideration of both breakthroughs and breaking points.

In part three of this three-part blog series on agentic AI, we explore the practical realities of implementing agentic AI systems in legal practice, including their:

  • Failure modes and critical vulnerabilities
  • Fundamental differences from traditional AI systems
  • Augmentation capabilities for professional workflows
  • Accountability and transparency requirements
  • Essential safeguards for strategic legal implementation
  • Future implications for elevating legal practice standards

Jump to ↓

  • The reality check: When sophisticated systems fail
  • Beyond pattern matching: The fundamental shift in AI reasoning
  • The augmentation advantage: Enhancing rather than replacing legal expertise
  • Building accountability in autonomous systems
  • Strategic implementation: Essential safeguards for legal AI
  • The future of legal practice

The reality check: When sophisticated systems fail

Every innovative technology has its learning curve, and agentic AI is no exception. Schilder’s most compelling insight comes from witnessing these systems at their worst — an experience that highlights important considerations for legal professionals exploring AI implementation.

“I posed a complex research question to a publicly available agentic system, seeking information on the distribution of techniques used in a professional domain across different populations,” Schilder explains. “The system performed a wide web search, compiled results, and produced beautiful bar charts using Python scripts. I was initially impressed by the results, but upon closer inspection, I noticed that the numbers presented were entirely fabricated and were not mentioned in the cited sources at all.”

The implications for legal practice are sobering. Imagine an AI system conducting case law research and producing compelling visualizations of precedent patterns, only for the attorney to discover that the underlying data was completely fabricated. The system even admitted, when confronted, that the figures were “illustrative or hypothetical” rather than evidence-backed.

This failure illuminates a critical vulnerability: “LLMs used in agentic systems are designed to please users, which can lead to hallucinations if not properly checked,” Schilder notes. For legal professionals, where accuracy isn’t just preferred but ethically mandated, this represents a fundamental challenge that must be addressed through robust system design.

Related blog

Why legal professionals need purpose-built agentic AI

Read now ↗

Beyond pattern matching: The fundamental shift in AI reasoning

What makes agentic AI different from the chatbots and document review tools legal professionals might already use? The answer lies in how these systems approach problem-solving — a distinction that could reshape legal practice.

“Agentic AI represents a transformative shift in the way we approach problem-solving and decision-making, because the agentic nature of the recent systems better model what a professional workflow looks like,” Schilder explains. “They are often combining different tools in order to generate a work product.”

Traditional AI systems, built on large language models (LLMs), rely primarily on pattern recognition and statistical prediction. They’re essentially sophisticated autocomplete systems, drawing from training data that becomes outdated the moment training stops. “Previous systems solely based on LLMs only rely on next token prediction and their ‘knowledge’ they have been trained on,” Schilder notes.

Agentic AI systems, by contrast, can access real-time information, use multiple tools simultaneously, and reason through complex, multi-step processes. This is similar to a legal professional researching a case, consulting multiple databases, analyzing precedents, and synthesizing findings into actionable recommendations.

“In contrast, agentic AI draws from data and tools that professionals work with every day, producing reliable answers that are grounded in reality,” Schilder observes. For legal professionals, this means AI systems that can navigate the same research databases, legal repositories, and analytical tools they use daily, but with the ability to process information at unprecedented scale and speed.
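
To make the contrast concrete, here is a minimal sketch, assuming a simple two-step research workflow, of what an agentic loop looks like: the system follows a plan, calls external tools such as a database lookup, and grounds its final answer in what those tools return rather than in next-token prediction alone. The tool functions and the trivial planner below are illustrative placeholders, not a description of any particular product.

```python
# Hypothetical sketch of an agentic loop: plan -> call tools -> ground answer.
# Tool names and the fixed "plan" are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Step:
    tool: str      # which tool the agent decided to use
    query: str     # the input it passed to that tool
    result: str    # what the tool returned

def search_case_law(query: str) -> str:
    """Stand-in for a research database lookup."""
    return f"3 precedents matching '{query}' (placeholder data)"

def summarize(text: str) -> str:
    """Stand-in for an LLM summarization call."""
    return f"Summary of: {text}"

TOOLS = {"search_case_law": search_case_law, "summarize": summarize}

def run_agent(task: str) -> tuple[str, list[Step]]:
    """Execute a fixed two-step plan and keep a trace of every tool call."""
    trace: list[Step] = []
    plan = [("search_case_law", task), ("summarize", None)]

    last_result = ""
    for tool_name, arg in plan:
        arg = arg if arg is not None else last_result  # chain steps together
        last_result = TOOLS[tool_name](arg)
        trace.append(Step(tool=tool_name, query=arg, result=last_result))

    # The final answer is grounded in tool output, not free-form generation.
    return last_result, trace

if __name__ == "__main__":
    answer, steps = run_agent("duty of care in maritime negligence")
    print(answer)
    for s in steps:
        print(f"[{s.tool}] {s.query!r} -> {s.result!r}")
```

The point of the sketch is the shape of the workflow, not the specifics: each intermediate result comes from a tool the professional already trusts, and every step is retained so the reasoning can be inspected afterward.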

The augmentation advantage: Enhancing rather than replacing legal expertise

One of the most persistent concerns among legal professionals is whether AI will replace human lawyers. Schilder’s perspective offers a more nuanced, and optimistic, view of the technology’s role.

“One common misconception about agentic AI is that it will replace human professionals entirely,” he explains. “However, in reality, agentic AI systems are designed to augment professional workflows by providing accurate and reliable information, but they still require human judgment and expertise to interpret the results correctly.”

This augmentation model has precedent in other professional domains. Schilder draws an intriguing parallel: “Similar to the Go players who have become more creative in their strategies after AlphaGo defeated one of the world’s best Go players in 2016, professionals should see this technology as an augmenting technology that can help them grow in their profession.”

For legal practice, this suggests that AI won’t simply make lawyers more efficient. Instead, it could elevate the entire profession’s capabilities. “What really sets agentic AI apart is its ability to augment professional workflows, rather than simply replacing them,” Schilder notes. “By streamlining workflow, aggregating information fast and accurately, agentic AI can help professionals work more efficiently and effectively.”

The implications are profound: legal professionals equipped with agentic AI might discover new approaches to case strategy, uncover previously hidden patterns in legal precedents, or identify novel arguments that human analysis alone might miss.

2025 Future of Professionals Report

Survey of 2,275 professionals and C-level corporate executives from over 50 countries

View report ↗

Building accountability in autonomous systems

For legal professionals, accountability is fundamental to professional responsibility. As agentic AI systems make increasingly autonomous decisions, establishing clear accountability frameworks becomes critical.

“To ensure accountability and auditability, it’s crucial to maintain logs of agent activities and provide transparent tracing mechanisms,” Schilder emphasizes. “These records can be invaluable in identifying potential failures or discrepancies and help us understand how the AI arrived at its conclusions.”

This transparency requirement aligns perfectly with legal practice standards, where documenting reasoning and maintaining clear audit trails are already essential. However, Schilder anticipates that regulatory frameworks will need to evolve: “I expect more regulations in this respect because we know from experiments with the latest frontier models that models under certain circumstances are able to lie and gaslight the users.”
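
One straightforward way to approximate the kind of audit trail Schilder describes is to record every agent action as a structured, timestamped entry that a reviewer can replay later. The sketch below is an assumption about how such logging might work, not a vendor implementation; the field names, file format (append-only JSON Lines), and agent names are hypothetical.

```python
# Minimal sketch of an audit trail for agent activity: every tool call is
# recorded as a structured, timestamped entry so a reviewer can later trace
# how the system reached its conclusion. Field names are illustrative.

import json
from datetime import datetime, timezone

class AuditLog:
    def __init__(self, path: str = "agent_audit.jsonl"):
        self.path = path

    def record(self, agent: str, action: str, inputs: str,
               output: str, sources: list[str]) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "inputs": inputs,
            "output": output,
            "sources": sources,  # citations backing this step, if any
        }
        # Append-only JSON Lines file: cheap to write, easy to diff and replay.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

# Example: log one research step performed by a hypothetical agent.
log = AuditLog()
log.record(
    agent="research-agent",
    action="search_case_law",
    inputs="duty of care in maritime negligence",
    output="3 candidate precedents returned",
    sources=["placeholder citation 1", "placeholder citation 2"],
)
```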

The comparison to aviation safety is particularly relevant: “It also makes sense from a business perspective, and we have seen similar developments in other industries such as air travel. Nobody wants to fly in an airplane knowing that the machine hasn’t been properly maintained or that the pilots are not well trained.”

Strategic implementation: Essential safeguards for legal AI

The failure Schilder witnessed offers important lessons for legal professionals implementing agentic AI systems. The key insight: “This failure taught me a crucial lesson about the importance of guardrails, grounding in actual data, and self-reflection in agentic AI systems.”

For legal applications, these safeguards must be built into every system deployment:

  • Verification protocols: Every AI-generated finding must be traceable to verified sources, with clear documentation of the reasoning process (a minimal check is sketched after this list).
  • Human oversight: Critical decisions require human review, particularly in high-stakes legal matters where accuracy is paramount.
  • Transparent limitations: Systems must clearly communicate when they’re operating beyond their reliable capabilities or when data is insufficient.
  • Continuous monitoring: Regular audits of AI outputs against known correct answers help identify drift or degradation in system performance.
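
As a rough illustration of the verification protocols above, the sketch below flags numeric claims in an AI-generated finding that do not appear in the cited sources and routes them to human review. It is an assumption about how such a check might work, mirroring the fabricated-figures failure Schilder describes, rather than any system’s actual implementation.

```python
# Hypothetical verification check: numeric claims in an AI-generated finding
# must appear in the cited source text, otherwise the finding is escalated
# for human review. A real system would need far more robust matching.

import re

def extract_numbers(text: str) -> set[str]:
    """Pull out numeric tokens such as '42', '3.5', or '17%'."""
    return set(re.findall(r"\d+(?:\.\d+)?%?", text))

def verify_finding(finding: str, cited_sources: list[str]) -> dict:
    """Report which numbers in the finding are supported by the sources."""
    source_numbers: set[str] = set()
    for source in cited_sources:
        source_numbers |= extract_numbers(source)

    claimed = extract_numbers(finding)
    unsupported = claimed - source_numbers
    return {
        "supported": sorted(claimed & source_numbers),
        "unsupported": sorted(unsupported),
        "needs_human_review": bool(unsupported),
    }

# Example: a chart-ready figure that never appears in the cited source
# should be flagged, not trusted.
result = verify_finding(
    finding="The technique was used in 62% of reviewed matters.",
    cited_sources=["The survey discusses the technique but reports no usage percentage."],
)
print(result)  # {'supported': [], 'unsupported': ['62%'], 'needs_human_review': True}
```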

The future of legal practice

Schilder’s vision for agentic AI in professional practice extends beyond simple efficiency gains: “This development is just the beginning, and the strategic thinking and analytical skills of professionals will become even more important.”

For legal professionals, this suggests a future where AI handles routine research, document analysis, and information synthesis, freeing lawyers to focus on higher-level strategic thinking, client counseling, and complex problem-solving. The technology promises not just to make legal work faster or cheaper, but to enable entirely new approaches to legal practice.

“We’re no longer just talking about machines doing tasks faster or cheaper — we’re talking about machines that can genuinely help us solve complex problems and make better decisions,” Schilder concludes.

As legal professionals navigate this transformation, the key lies in understanding both the extraordinary potential and the critical limitations of agentic AI. By implementing appropriate safeguards, maintaining human oversight, and focusing on augmentation rather than replacement, the legal profession can harness this technology to elevate practice standards while preserving the judgment, ethics, and strategic thinking that remain uniquely human.

The future of legal practice isn’t about lawyers versus AI. It’s about lawyers empowered by AI to deliver unprecedented value to their clients and their profession.

To learn more about agentic AI and how it can help legal professionals optimize their workflow, watch our webinar on demand.

Webinar


Agentic AI for legal professionals: What it means & when to expect it

Watch on demand ↗
