What today’s courts are learning and how they’re modernizing with the right AI approach
Courts across the country are adapting to an entirely new reality: AI is no longer optional, and its impact shows up daily in filings, hearings, and judicial workloads. In the recent Thomson Reuters webinar “Real talk: AI reliability and hallucinations in the courts,” panelists discussed what’s happening inside real courtrooms and how judicial leaders are rethinking their approach to AI.
Featuring Rabihah Butler, Esq., Amanda Soczynski, JD, and Hon. Debra McLaughlin, the conversation highlighted a crucial shift: AI is already here, but the type of AI courts choose determines whether it becomes a risk or a value‑adding tool.
Jump to ↓
Why courts are experiencing more AI‑generated errors
What judges are seeing in court
A new norm: “Do not trust until verified”
AI as a thought partner, not a decision‑maker
Where courts are seeing real value and fewer risks
Guidance for courts evaluating their next step
Why courts are experiencing more AI‑generated errors
Generative AI tools predict text based on patterns, not truth. As Rabihah Butler explained, this is precisely why AI hallucinations occur, with a system confidently producing:
- Fabricated cases
- Incorrect or outdated citations
- Misstated statutes or rules
- Entirely fictional legal authorities
These errors aren’t malicious. They’re simply the result of an AI tool trying to be helpful without understanding legal accuracy or responsibility.
For courts, this creates a new kind of workload as documents appear polished while hiding critical flaws.
What judges are seeing in court
Judge Debra McLaughlin shared that filings from self‑represented litigants have changed dramatically. Instead of handwritten motions with sparse citations, many courts now receive well‑formatted documents that contain confidently asserted but incorrect legal authorities.
This creates challenges:
- More time spent validating every citation
- Additional hearings to educate filers about verification
- The risk of having to sanction individuals who unknowingly relied on faulty AI tools
- Increased burden on opposing counsel
To manage this, some judges are temporarily pausing cases, setting clarifying hearings, and guiding pro se litigants on how to verify what they submit. This reflects a growing understanding that the problem is not AI itself but unverified AI output.
A new norm: “Do not trust until verified”
Initially, many courts approached AI similarly to evidence: trust it, but verify it.
As Butler shared, that standard has shifted. Judges and legal professionals now emphasize: Do not trust AI output until it has been verified.
This mindset protects court processes and encourages disciplined AI use. It also naturally leads many offices to ask: Which tools make verification easier? That question often begins the exploration phase of a modern AI strategy.
AI as a thought partner, not a decision‑maker
Throughout the discussion, the panelists made one point clear: AI should support, not replace, human judgment.
Judges are using AI most effectively for:
- Draft refinement and clarity
- Issue spotting
- Understanding expert domains
- Pattern identification in documents
- Scheduling and administrative triage
These are meaningful time‑savers that don’t compromise judicial discretion. They also offer a low‑risk entry point for courts beginning to evaluate AI as part of their operational modernization.
Where courts are seeing real value and fewer risks
Court leaders in the webinar pointed to several ways professional‑grade tools make a practical difference:
- Citation validation tools reduce time spent verifying accuracy.
- Document analysis tools catch misquoted passages or invented language.
- Integrated legal research ensures recommendations draw from reliable authorities.
- Privacy‑first design offers peace of mind when dealing with sensitive case information.
Courts adopting these capabilities are finding that AI doesn't add work; it removes friction that has existed for years.
Guidance for courts evaluating their next step
If your court or judicial office is exploring AI adoption, the panelists offered two strong takeaways:
Start with small, low‑risk tasks
Try AI with familiar topics and compare the output to trusted sources. This helps staff understand where it adds value and where verification is needed.
Maintain human judgment at every step
AI can illuminate patterns and streamline information processing, but it cannot replace judicial reasoning. Maintaining human oversight at every step ensures the integrity of legal decisions while still benefiting from meaningful productivity gains.
AI can significantly reduce workload, but only if courts choose tools designed for legal reliability. Hallucinations remain an inherent limitation of general‑purpose AI, but professional‑grade legal AI mitigates those risks by grounding its responses in vetted, authoritative sources. For courts seeking modernization, reduced administrative burden, and greater accuracy, selecting the right AI system becomes the difference between disruption and measurable value.
To dive deeper into evaluating AI reliability and minimizing risk in judicial environments, explore the Real Talk: AI Reliability and Hallucinations in the Courts webinar.
