
AI and the practice of law: Major impacts to be aware of in 2024

· 7 minute read


How AI is transforming the legal industry for lawyers, the courtroom, consumers, education, and the future of law practice.


 


Generative AI’s impact on legal work

Generative AI (GenAI) assistants will become indispensable to practically every lawyer.

The use of GenAI can automate routine tasks, such as creating a solid first draft of a brief, a contract based on a set of facts, or an RFP response. Even with a human checking and editing the draft, the time needed can be greatly reduced. This frees up time to add value with skills only humans can provide – analyzing results, strategic thinking, and advising clients.

GenAI can also be used in more creative and strategic ways, such as “anticipating the arguments the other side will bring to the table, and doing so via different personas you ask the GenAI to adopt,” says Sterling Miller, CEO and Senior Counsel of Hilgers Graben PLLC.
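As an illustration, this persona technique amounts to varying the system instruction while holding the matter summary constant. The sketch below is purely hypothetical – the `build_persona_prompts` helper and the persona wording are invented for this example, and real use would pass the resulting messages to a GenAI provider's chat API.

```python
# Hypothetical sketch: asking a GenAI assistant to adopt opposing personas.
# Nothing here is a real product API; it only assembles chat-style prompts.

def build_persona_prompts(case_summary: str, personas: list[str]) -> list[dict]:
    """Build one chat-style prompt per persona, each asking the model
    to anticipate the arguments that persona would raise."""
    prompts = []
    for persona in personas:
        prompts.append({
            "system": f"You are {persona}. Argue against the position below.",
            "user": f"Case summary:\n{case_summary}\n\n"
                    "List the three strongest arguments you would raise.",
        })
    return prompts

prompts = build_persona_prompts(
    "Our client seeks damages for breach of a supply contract.",
    ["a skeptical trial judge", "opposing counsel for the supplier"],
)
print(len(prompts))  # → 2
```

Running the same summary through several personas gives a lawyer multiple adversarial readings of the same facts for the price of one drafting session.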

The sheer versatility of GenAI brings different opportunities for different-sized firms. Solo attorneys can cut significant time from transactional and litigation matters. Small firms have more freedom to experiment and explore new practice areas. And global law firms can drive cost savings through automation at scale.

 

AI in the courtroom

In June 2023, an attorney filed a brief written with the help of ChatGPT, citing legal cases supporting his client’s position. But as the judge discovered, six of those cases didn’t exist. They were hallucinations.

Large language models (LLMs) “can provide incorrect answers with a high degree of confidence,” says Rawia Ashraf, VP, Product, Legal Technology at Thomson Reuters.

The 2024 Generative AI in Professional Services Report found that government legal departments and courts are generally more wary of new technologies. Among court respondents, 31% said they were concerned – the most common reaction – while 26% felt hesitant and only 15% felt excited, the lowest level of excitement in any job segment surveyed.

“Courts will likely face the issue of whether to admit evidence generated in whole or in part from GenAI or LLMs, and new standards for reliability and admissibility may develop for this type of evidence,” says Rawia Ashraf.

However, courts may simply be trying to figure out where the technology fits into the modern court system. “I have used ChatGPT and other AI programs, and it saves time and levels the playing field for certain tasks and professions, but [it’s] also a bit dangerous in a sense that if it gets censored, it will inevitably be biased,” said one US judge. “But for computing and translating data, I think it is amazing.”



Risks and ethical implications

Many professionals, including those working in court systems, may think of GenAI as public-facing tools such as ChatGPT. But using these models carries a substantial degree of risk.

“AI is not a monolith,” states Laura Safdie, Vice President, Artificial Intelligence GTM & Global Affairs Lead at Thomson Reuters. “There is a huge difference between consumer AI and legal AI like CoCounsel, which uses only reliable and verifiable sources of data. Its knowledge base is your firm’s or your client’s data. And that data is never going to be used to train a third-party LLM.”

While GenAI will continue to impact legal cases, the technology in all its forms presents much more of an opportunity than a risk to legal practice.

The more firms adopt trusted, purpose-built legal tools – rather than experimenting with public-facing ones, however sophisticated – the more GenAI’s utility will increase and its risks will diminish.

Find out more about the ethics of LLMs and GenAI, and how a trusted AI assistant could help, in our blog posts.



Impact on consumers and clients

With the advent of natural language processing (NLP) – conversing with AI – basic legal information may become increasingly accessible to people. Potential clients might seek legal information using AI, which could help prepare questions to ask their lawyer.

But large corporations, which tend to be the clients of large global firms, have their own legal teams, and 60% of them have someone dedicated to technology. “Clients want big firms to use technology. They also don’t want to pay firms to handle things they can do themselves,” says Zach Warren, Manager, Technology and Innovation, Thomson Reuters Institute.

The relationship between clients and their firms will be in a constant state of change for years to come.

 

Future outlook

The legal profession is not going to be destroyed by AI, as Sterling Miller explains: “Lawyers must validate everything GenAI spits out. And most clients will want to talk to a person, not a chatbot, regarding legal questions.”

But training will change. Lawyers will always need to do some junior work – for example, to understand the process of legal research or how to write a good brief. But when certain tasks can be automated, the question is: how much?

Law school curricula will need to keep pace with changes in technology law, ethics, and data science. Law schools are increasingly incorporating AI into their curricula, with more than half now offering AI-related courses, spurred by the industry’s growth and employer demand. Institutions such as Arizona State University and the University of California, Berkeley are introducing specialized programs and degrees in AI to prepare lawyers for using AI in their practices and for advising clients on AI-related matters.


As law schools try to meet the demands of a booming industry, the focus on applied research in AI and emerging technologies is also expanding. This trend is exemplified by initiatives like Thomson Reuters Labs, which is dedicated to the research, development, and application of AI and other emerging technologies. Thomson Reuters Labs actively contributes to the broader academic and professional communities by publishing its findings in scientific journals and presenting at research-focused conferences and workshops.

Last year alone, Thomson Reuters published scholarly articles on topics such as segmenting handwritten and printed text in marked-up legal documents, the best techniques for prompting GenAI, the effectiveness of uncertainty quantification in text classification, and making a computational attorney.

Find out which tasks you might be able to automate or augment with GenAI, and keep up with the latest developments in legal AI by signing up for our newsletter.

 
