
Artificial Intelligence

The key legal issues relating to the use, acquisition, and development of AI

· 15 minute read

Summary of risks and ethical challenges for legal professionals, and how to navigate the use of AI

Many legal software tools are incorporating large language models (“LLMs”), such as GPT-4, to enhance their performance. But litigators should know that using these models carries a substantial degree of risk. You must be careful about which technologies to use, when to use them, and how to use them.

Generative artificial intelligence (GenAI) is a still-unfolding story in the legal industry. There remains a lack of clarity in many areas as to how to employ this technology while still fulfilling a litigator’s legal and ethical responsibilities.

There’s no guidebook to tell an attorney how best to do this because the technological environment is changing too rapidly. It’s up to the individual legal professional to keep abreast of developments and responsibly experiment with generative AI. Having a solid grasp of both its capabilities and key issues will be necessary to know when — or when not — to use AI.

 

Jump to:

  Significant risks

  Ethical pitfalls of using generative AI and LLMs

  Possible procedural and substantive issues

  Key issues by practice area

 

What significant risks do GenAI and LLMs present in the litigation context?

Generative AI and the LLMs that power it have a number of limitations and weaknesses, and further flaws may come to light over the next year or two. Using this technology is far from a risk-free endeavor for legal professionals.

There are two primary categories of risk in AI and LLM usage: output risk, in which the information generated by the AI system proves too unreliable to use, and input risk, in which the information entered into an AI model may itself be put at risk.

 


 

Output risks

LLMs can hallucinate. That means, as Thomson Reuters’ Rawia Ashraf defines it, “that they provide incorrect answers with a high degree of confidence.” The popular image of AI as an artificial brain that thinks for itself is vastly inaccurate. A GPT model cannot reason as human beings do, and its knowledge about a topic is entirely owed to the data it has been given. Thus, inadequately prepared models may return murky, nonsensical, or flat-out wrong answers to user queries.

The possibility of AI hallucinations, combined with the relative scarcity of accurate legal domain knowledge currently found in most LLMs, makes it particularly risky for litigators to rely on information produced by an LLM at present.

Certainly, the more legal-specific data that is used to train LLMs (a process called fine-tuning), the lower the chance that a user will encounter a hallucination, and the more accurate the LLM’s output will become. But we are far from that point yet.

Bias

LLMs, like any type of AI, are not purely objective engines of reason. The people who program them can be biased, and thus AI programs can be biased.

As Ashraf notes, “if biases exist in the data used for training the AI, biases will inform the content that AI generates as well. Models trained with data that are biased toward one outcome or group will reflect that in their performance.”

Here’s an example. One prominent use of AI to date is by companies seeking to automate employee screening and recruitment. AI promises to streamline these processes via sorting, ranking, and eliminating candidates with minimal human oversight. But turning over these tasks to AI systems carries potential risk. Generative AI usage will not insulate an employer from discrimination claims, and AI systems may inadvertently discriminate.

 


 

How? AI tools may analyze internet sources, social media, and public databases, many of which contain personal information about prospective applicants that an employer cannot legally ask about on an employment application or during an interview. This includes an applicant’s age, religion, race, sexual orientation, or genetic information.

Further, AI recruiting tools may replicate and perpetuate past discriminatory practices. Systems may favor applicants based on educational backgrounds or geographic locations, which in turn may skew results based on race. Incomplete data, data anomalies, and errors in algorithms may also create biased outcomes; an algorithm built on data from one part of the world may not function effectively in other places.

Input risks

The greatest input risk that using LLMs presents today is a potential breach of confidentiality. If precautions aren’t taken, using LLMs could wreak havoc on attorney-client privilege and client data security obligations.

Any attorney using an LLM must ensure that the platform neither retains any inputted data nor allows any third parties to access that data. While platform developers have begun to introduce new functionality to address privacy issues, such as allowing users to turn off chat histories and prevent the information they enter from being used to train the platform, some LLMs have not had such upgrades.

A law firm may want to sign a licensing agreement — with the AI provider or the platform that incorporates the AI — with strict confidentiality provisions that explicitly prevent uploaded information from being retained or accessed by unauthorized persons.

But even with such an agreement in place, legal professionals should regard an LLM as a still-insecure venue. And they certainly should not put any confidential information into a public model, such as ChatGPT.

 

What are the primary ethical pitfalls of using generative AI and LLMs?

Generative AI can’t replace human expertise, nor take the blame for what’s ultimately a human error. The buck will always stop with litigators when they use generative AI in their legal work.

Just as litigators can incur penalties for the conduct of non-attorneys they supervise or employ, those who use generative AI to assist them in legal work without proper oversight could be charged with several ethical violations.

 


 

Misunderstanding technology 

For one thing, attorneys have an ethical obligation to understand a technology before using it, stemming from their duty to provide a client competent representation — see ABA Model Rule 1.1.  

An attorney must have the knowledge necessary to provide such representation, which means that an attorney must keep up with changes in relevant technology and reasonably understand its benefits and risks. Several jurisdictions have specifically incorporated a duty of “technological competence” into their ethical rules, which includes generative AI. 

How should one be technologically competent when it comes to AI systems? Be familiar with how the platform was trained, all its known limitations, and for which tasks it can be used. Legal professionals should have a sense of the general quality of the information that an LLM produces. It’s arguable that an attorney who fails to review LLM output for accuracy is committing an ethical violation. 

Failure to protect confidentiality

Protecting client confidentiality is another significant ethical concern. As Ashraf notes, “generative AI by its nature is designed to be trained (or learns) at least in part based on information that users provide. So when litigators use generative AI to help answer a specific legal question or draft a document specific to a matter by typing in case-specific facts or information, they may share confidential information with third parties, such as the platform’s developers or other users of the platform, without even knowing it.”

This runs directly against an attorney’s obligation not to reveal confidential information relating to client representation without the client’s informed consent. It’s also against their obligation to make reasonable efforts to prevent unauthorized disclosure of information relating to client representations.

 


 

 

What types of procedural and substantive issues are likely to arise from the use of generative AI and LLMs?

The growing use of LLMs will push the limits of existing discovery and evidentiary rules and create new procedural issues. As Practical Law’s Ashraf notes, “courts will likely face the issue of whether to admit evidence generated in whole or in part from generative AI or LLMs, and new standards for reliability and admissibility may develop for this type of evidence.”

There could be strict prohibitions against using generative AI-produced information in regulated industries, such as banking and finance, “where the underlying basis for the information may not be clear or easily explained to regulators,” Ashraf said. “An increase in class action lawsuits by plaintiffs ranging from consumers to artists in various areas of law is also likely.”

On the substantive side, the use of generative AI could inspire a wave of litigation, including:

  • Legal malpractice claims — for example, a client claiming that their attorney’s use of generative AI was an inadequate substitute for the attorney’s stated legal expertise
  • Copyright claims — for example, whether using copyrighted material to train generative AI constitutes fair use (see “AI as intellectual property” below)
  • Data privacy claims — see “Data protection and privacy issues” below
  • Consumer fraud claims, such as a company using generative AI to fake positive reviews of its products or services
  • Defamation claims, such as a plaintiff claiming defamation based on inaccurate statements generated by AI systems

 

Key issues by practice area

Generative AI and LLMs have implications for virtually every industry in which a lawyer may represent clients, and they will affect nearly every type of transaction and practice those clients engage in.

 

Jump to:

  Commercial transactions involving AI

  AI and product liability

  Data protection and privacy issues when using AI

  AI as intellectual property

  AI in bankruptcy

  AI in the workplace

  AI issues regarding health plans, HIPAA compliance, and retirement plans

  Antitrust considerations when using AI

Commercial transactions involving AI

When AI becomes a core part of a commercial transaction, aspects of standard commercial agreements may raise unique negotiation issues. These include the following risk allocation provisions:

  • Representations and warranties — for example, do a vendor’s representations and warranties concerning its AI system’s performance adequately address the potential business impacts of a system failure?
  • Indemnification — when an AI system’s decision-making process results in a liability, how do you determine whether the AI provider or its user caused the event giving rise to liability?
  • Limitations of liability — for instance, if an AI data analytics system inadvertently discloses user information, the commercial provider may face third-party data breach claims from those users.

AI and product liability

The more companies integrate AI into their products and systems, the greater the potential for AI to become the basis of claims by plaintiffs seeking damages. Generative AI’s ability to act autonomously raises novel legal issues.

One key question is how to assign fault when the product at issue incorporates AI. Litigants and courts have begun to test the application of traditional legal theories to injuries involving AI products, such as self-driving vehicles and workplace robots.

For example, Cruz v. Raymond Talmadge d/b/a Calvary Coach involved an AI-incorporated GPS device. After a bus struck an overpass and injured the plaintiffs, they brought claims against the manufacturers of the GPS devices the bus driver had used, based on traditional theories of negligence, breach of warranty, and strict liability.

Data protection and privacy issues when using AI

Organizations using personal information in AI may struggle to comply with the host of state, federal, and global data protection laws, such as those that restrict cross-border personal information transfers.

Some countries, particularly those in the EU, have comprehensive data protection laws that restrict AI and automated decision making involving personal information. Other countries, such as the U.S., don’t have a single, comprehensive federal law regulating privacy and automated decision making. This means that parties must be aware of all relevant sectoral and state laws, such as the California Privacy Rights Act of 2020.

Many of these data protection laws require organizations to minimize the amount of personal information they hold about an individual and demand that any AI algorithms used be transparent, explainable, fair, empirically sound, and accountable.

AI as intellectual property

Companies creating and employing AI and related technology may face unique intellectual property (IP) issues. Among these are the following:

  • How to protect AI IP, such as by registering patents, filing copyrights, or claiming AI use as a trade secret
  • How to determine ownership of AI IP
  • How to determine whether the company is the victim of AI IP infringement

AI in bankruptcy

While the Bankruptcy Code doesn’t specifically define AI, treatment of AI in bankruptcy appears to be a straightforward matter when the debtor owns the AI software and does not license it to a third party. Here the AI is considered property of the debtor’s estate under section 541 of the Bankruptcy Code, so the debtor can sell it free and clear of all claims by third parties.

Complications may arise, however, if the debtor instead licensed AI software from or to a third party before the bankruptcy and the license is deemed an executory contract subject to the debtor licensor’s right to reject, assume, or assign the agreement.

AI in the workplace

Modern workplaces are increasingly reliant on AI to perform human resources and employee management functions. All of this carries some potential risk, including:

  • Recruiting and hiring — for example, generative AI screening and recruitment tools could discriminate against individuals with a disability, if, say, AI tools analyze an individual’s speech pattern in a recorded interview and then negatively review someone with a speech or hearing impairment.
  • Onboarding employees — for example, AI systems that research employee backgrounds could potentially violate employees’ privacy rights under various applicable laws, such as biometric and password protection statutes.

AI issues regarding health plans, HIPAA compliance, and retirement plans

There are many uses for generative AI in the health plan context. Robo-advisers, in particular, can alert plan participants to various options, recommend cost-saving prescriptions and procedures, and identify high-cost health plan medical claims in administering benefits.

These uses also raise compliance and security issues, including under the Health Insurance Portability and Accountability Act of 1996 (HIPAA), the federal privacy and security law that applies to health plans.

Using generative AI in retirement plans raises similar issues, particularly with employing robo-advisers to analyze and allocate resources by risk profiles, and using algorithms to target investment returns.

Questions include:

  • How robo-advisers will meet fiduciary duty requirements under ERISA
  • How fiduciaries should evaluate and monitor robo-adviser performance, and
  • How AI fee analyses comply with a sponsor’s legal obligations to pay reasonable fees and expenses

Antitrust considerations when using AI

AI usage carries a number of potential antitrust risks, particularly related to unlawful anticompetitive behavior. For instance, AI systems could be used to facilitate price-fixing agreements among competitors. The Department of Justice has already secured guilty pleas from parties using pricing algorithms to fix prices for products sold in e-commerce.

Antitrust risks could also arise from AI systems themselves engaging in anticompetitive behavior or otherwise seeking to lessen competition. For example, an AI system could develop sufficient learning capability upon assimilating market responses and then conclude that colluding with a competing AI system is the most efficient way for a company to maximize profits.

 


 

More about legal issues 

The relationship between generative AI and the legal industry is well established now, and it will only continue to grow.

Notwithstanding the many benefits that AI and LLMs promise for litigators, there is a substantial amount of risk. Legal professionals must be able to identify the key issues concerning AI, understand how those issues are developing, and anticipate how the technology may be used in the future.

 


 
