How AI is transforming the legal industry for lawyers, the courtroom, consumers, education, and the future of law practice.
The rise of generative AI (GenAI) has been one of the most transformative events in the legal industry in the past few years. What was once a theoretical “next generation” technology became, upon the rollout of large language models (LLMs), a new reality for many legal professionals.
GenAI has the potential to change — in some cases radically — how legal professionals do their jobs in the years to come. Law firms using GenAI systems are already seeing greater efficiency in legal research and document management. These systems are on course to become an essential part of a firm’s due diligence efforts, to enhance its drafting capabilities, and to serve as a valued “legal assistant” for a host of other functions.
A firm using GenAI technology is positioned to work faster, more comprehensively, and at a lower cost to clients than its less tech-savvy competitors.
Jump to ↓
Generative AI’s impact on legal work
AI in the courtroom
Risks and ethical implications
Impact on consumers and clients
Future outlook
Generative AI’s impact on legal work
Generative AI assistants will become indispensable to practically every lawyer.
The use of GenAI can automate routine tasks, such as creating a solid first draft of a brief, a contract based on a set of facts, or an RFP response. Even with a human checking and editing the draft, the time needed can be greatly reduced. This frees up time to add value with skills only humans can provide – analyzing results, strategic thinking, and advising clients.
GenAI can be much more creative and strategic too, such as “anticipating the arguments the other side will bring to the table, and doing so via different personas you ask the GenAI to adopt,” says Sterling Miller, CEO and Senior Counsel of Hilgers Graben PLLC.
The sheer versatility of GenAI brings different opportunities for different-sized firms. Solo attorneys can cut significant time from transactional and litigation matters. Small firms have more freedom to experiment and explore new practice areas. And global law firms can drive cost savings through automation at scale.
AI in the courtroom
In June 2023, an attorney filed a brief written with the help of ChatGPT, citing legal cases supporting his client’s position. But as the judge discovered, six of those cases didn’t exist. They were hallucinations.
Large language models (LLMs) “can provide incorrect answers with a high degree of confidence,” says Rawia Ashraf, VP, Product, Legal Technology at Thomson Reuters.
Thomson Reuters’ Generative AI in Professional Services Report found that government legal departments and courts are generally more wary of new technologies. Among court respondents, concern was the most common reaction (31%), followed by hesitancy (26%). Only 15% of court respondents felt excited about these technologies, the lowest level of excitement in any job segment surveyed.
“Courts will likely face the issue of whether to admit evidence generated in whole or in part from GenAI or LLMs, and new standards for reliability and admissibility may develop for this type of evidence,” says Ashraf.
However, courts may simply be trying to figure out where the technology fits into the modern court system. “I have used ChatGPT and other AI programs, and it saves time and levels the playing field for certain tasks and professions, but [it’s] also a bit dangerous in a sense that if it gets censored, it will inevitably be biased,” said one US judge. “But for computing and translating data, I think it is amazing.”
Risks and ethical implications
Many professionals, including those working in court systems, may equate GenAI with public-facing tools like ChatGPT. But using these consumer models carries a substantial degree of risk.
“AI is not a monolith,” states Laura Safdie, Vice President, Artificial Intelligence GTM & Global Affairs Lead, Thomson Reuters. “There is a huge difference between consumer AI and legal AI like CoCounsel which uses only reliable and verifiable sources of data. Its knowledge base is your firm’s or your client’s data. And that data is never going to be used to train a third-party LLM.”
Many legal professionals have reported concern that GenAI usage could put them or their firms in violation of ethical and professional codes of conduct. There are currently no clear, industry-wide, fully agreed-upon guidelines on how to proceed. For now, lawyers must monitor developments from the state level to the international level to see how ethical codes of conduct evolve in the AI era.
For example, the California State Bar’s Practical Guidance for the Use of Generative AI in the Practice of Law guidelines include:
- Warnings against “over-reliance” on AI tools for research and analysis, and against outsourcing “professional judgment” to AI systems
- Guidance differentiating what a lawyer may charge a client (e.g., time spent refining information derived from AI-powered systems) from what they should not charge for (e.g., time spent on internal AI training)
State bars in New York and Florida, among other states, have issued releases listing similar concerns. The ABA also issued a Formal Opinion which notes that:
- Lawyers need to understand the benefits and risks of using GenAI systems
- Lawyers should be cognizant of the need to keep all client information confidential when using GenAI
- Lawyers should charge “reasonable” fees and expenses when using GenAI systems
It may come down to each law firm creating its own ethical playbook for using GenAI. So far, the industry has reached a rough consensus: AI should not serve as the final arbiter of any legal action, but it is acceptable for clerical duties and as a research aid, for example.
In a recent Thomson Reuters survey, respondents said they felt secure using GenAI tools for question-and-answer services or administrative tasks. 72% of respondents felt that GenAI should be applied to non-legal work within a firm, more than 20 percentage points above the share who felt it should be applied to legal work.
While GenAI will continue to impact legal cases, the technology in all its forms presents much more of an opportunity than a risk to legal practice.
GenAI’s utility will increase, and its risks will shrink dramatically, as more firms adopt trusted, purpose-built legal tools rather than experimenting with public-facing ones, however sophisticated.
Find out more about the ethics of LLMs and GenAI, and how a trusted AI assistant can help, in our blog posts.
Impact on consumers and clients
GenAI has a wide array of uses in the legal world. Among these are:
- Research: AI systems can plumb databases and scour hundreds of uploaded documents in minutes.
- Summarization: AI tools can condense lengthy documents into two-paragraph summaries, allowing lawyers to rapidly extract and integrate core information from vast collections of printed and digital material.
- Analyses: AI-powered tools help a lawyer easily locate whatever terms they’re searching for. The systems are trained to look for inconsistencies in data, potential errors, any notable gaps in documentation, and any non-standard language in a contract, for example.
- Compliance and risk management: AI systems can monitor regulatory changes and update vital information in real-time, ensuring that a lawyer can keep clients in compliance with all applicable regulations and ethical standards.
- Predictive analytics: By analyzing historical case data, AI can provide lawyers with a set of well-documented scenarios as to how a legal outcome could proceed, enabling lawyers to craft a compelling argument for client advisories.
Law firms also stand to reap notable time savings by using GenAI tools. A recent Future of Professionals Report found that AI could free up 12 hours per week for professionals within the next five years, or four hours per week over the upcoming year. That’s about 200 hours per person each year: the equivalent of adding a new colleague for every 10 team members on staff.
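As a quick back-of-the-envelope check on those figures, assuming roughly 50 working weeks and a 2,000-hour full-time year (assumptions of ours, not stated in the report):

```python
# Sanity-check the report's time-savings arithmetic.
# Assumed (not from the report): 50 working weeks/year, 2,000-hour full-time year.
HOURS_SAVED_PER_WEEK = 4        # projected savings per professional, upcoming year
WORKING_WEEKS_PER_YEAR = 50
FULL_TIME_HOURS_PER_YEAR = 2000

# Hours each professional saves in a year
hours_saved_per_person = HOURS_SAVED_PER_WEEK * WORKING_WEEKS_PER_YEAR  # 200

# A 10-person team therefore recovers about one full-time colleague's hours
team_size = 10
total_saved = hours_saved_per_person * team_size                         # 2,000
colleagues_equivalent = total_saved / FULL_TIME_HOURS_PER_YEAR           # 1.0

print(hours_saved_per_person, colleagues_equivalent)
```

Four hours a week compounds to roughly 200 hours per person, so every 10 team members recover about one full-time colleague’s worth of capacity, matching the report’s framing.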
That said, it will still be essential for a legal provider to have rigorous peer review and fact-checking systems for any analyses that use GenAI tools. AI is only as good and as reliable as the inputs it receives, and the technology can still produce “hallucinations”: inaccurate or unfounded summaries of data, for example.
It’s best to regard an AI system as “people plus technology,” says Zach Warren, manager for enterprise content for technology and innovation with the Thomson Reuters Institute. “Not as a technology that’s in any way designed to replace critical thinking in lawyering.”
With the advent of natural language processing (NLP), which lets people converse with AI in plain language, basic legal information may become increasingly accessible. Potential clients might use AI to look up legal information and to prepare questions for their lawyer.
But large corporations, which tend to be the clients of large global firms, have their own legal teams, and 60% of them have someone dedicated to legal technology. “Clients want big firms to use technology. They also don’t want to pay firms to handle things they can do themselves,” says Zach Warren.
This balance between what clients handle in-house and what they send to outside firms will be in a constant state of change for years to come.
Future outlook
The legal profession is not going to be destroyed by AI, as Sterling Miller explains. “Lawyers must validate everything GenAI spits out. And most clients will want to talk to a person, not a chatbot, regarding legal questions.”
But training will change. Junior lawyers will still need hands-on work, for example, to understand the process of legal research or how to write a good brief. But when certain tasks can be automated, the question becomes: how much?
Law school curricula will need to keep pace with changes in technology law, ethics, and data science. More than half of law schools now offer AI-related courses, spurred by the industry’s growth and employer demand. Institutions like Arizona State University and the University of California, Berkeley are introducing specialized programs and degrees in AI to prepare lawyers to use AI in their practices and to advise clients on AI-related matters.
As law schools try to meet the demands of a booming industry, the focus on applied research in AI and emerging technologies is also expanding. This trend is exemplified by initiatives like Thomson Reuters Labs, which is dedicated to the research, development, and application of AI and other emerging technologies. Thomson Reuters Labs actively contributes to the broader academic and professional communities by publishing their findings in scientific journals and presenting at research-focused conferences and workshops.
Last year alone, Thomson Reuters published scholarly articles addressing topics such as segmenting handwritten and printed text in marked-up legal documents, the best techniques for prompting GenAI, the effectiveness of uncertainty quantification in text classification, and making a computational attorney.
Find out which tasks you might be able to automate or augment with GenAI, and keep up with the latest developments in legal AI by signing up for our newsletter.

CoCounsel
Bringing together generative AI, trusted content and expert insights
Meet your legal AI assistant ↗