
Building trust in AI to keep firm and client data safe

7 minute read

Ensure client confidentiality through rigorous solution due diligence

Highlights

  • Rigorous vetting of AI providers is essential, requiring legal professionals to evaluate a vendor's history, expertise, and specific security measures to protect sensitive client data and ensure confidentiality.
  • Legal practices must identify and avoid vendors that exhibit red flags, such as providing vague answers on security, lacking a clear data deletion process, or being reluctant to disclose details about third-party data sharing.
  • Professional-grade AI solutions should offer customer-first data storage policies that ensure user control, implement stringent security controls, and demonstrate compliance with data protection regulations like GDPR and CCPA.

 

Today’s legal professionals are exploring new AI solutions available on the market. But how does a firm or other legal practice make the right choice? Our new white paper, “Evaluating AI solutions for legal professionals,” discusses in detail how practitioners can assess potential AI tools and providers. This includes scrutinizing a solution’s capabilities and reliability for research, contract drafting, and other use cases.

Legal professionals also need to determine how well a vendor secures proprietary data. They know that building trust in AI is crucial, especially when it comes to protecting client confidentiality. Legal practices regularly handle sensitive information, such as a client’s personal and financial records or a company’s projected sales and expense budgets. Misuse of or unauthorized access to such data has significant legal and ethical consequences. Inadequate security measures can result in data breaches, which can then lead to malpractice claims.

As the new white paper observes, “selecting an AI provider is perhaps the most critical choice a legal firm will make this decade.” That means that “a poor decision wouldn’t be merely a strategic misstep — it could be catastrophic.” AI systems lacking adequate security measures risk exposing client data to hostile actors and putting the law firm in violation of its professional obligations. These stakes make it essential to determine a potential AI solution’s data protection capabilities before adopting it.

 


White paper

The future of the law firm: No time for bystanders amid AI’s increasing influence

Access white paper ↗

How to conduct rigorous security due diligence

The need for stringent security continues to grow. The spike in cyberattacks has underscored just how critical data privacy and security are to legal practice. According to the 2024 ABA Legal Technology Survey, 60% of law firms have implemented formal cybersecurity policies, yet phishing and ransomware remain major threats.

The white paper offers expert guidance on how to conduct thorough due diligence on potential vendors and the AI products they offer. A rigorous vetting process investigates a provider’s history, expertise, and clientele. Not all AI solutions are created equal: some are general-purpose consumer models, while others are built on specialized data for specific purposes. General-use AI tools like ChatGPT aren’t designed to satisfy the strict obligations attorneys have to protect privileged work product and confidential client information.

That’s why lawyers need to thoroughly assess a provider’s data security and confidentiality practices before entrusting it with any confidential information. As they pursue the vetting process, legal professionals should be alert to potential red flags from prospective AI vendors.

For instance, lawyers will want specifics about where data generated by an AI system will be stored and who will have access to it. The provider should disclose whether data created in communications and transactions is used to train its AI model. If so, who retains ownership of and rights to the information legal professionals enter into the solution? Who owns the material the solution generates? In addition, a law firm should be able to have any of its data deleted upon request.

Firms also need to know whether an AI system will share data with third-party vendors or service providers. If it is shared, they’ll want specifics about how secure the sharing process is, who these parties are, and what type of data they will receive.

One red flag is a potential vendor that responds to cybersecurity questions with vague references to “industry security standards” but offers no concrete details. Other warning signs include the lack of a discernible data deletion process and unnecessarily long data retention periods. Yet another is reluctance, or outright stonewalling, when a provider is asked about third-party involvement and claims the details are “proprietary information.”

Infographic

Choosing the right legal AI partner: A comprehensive worksheet

View infographic ↗

What an AI vendor should provide

A professional-grade AI solution should be able to offer the following data security protocols:

Customer-first data storage policies

It’s critical that legal professionals control how their data is used, accessed, and stored. They should always retain control over their data and be able to remove it completely from the platform at any time.

Stringent security controls

Providers of AI solutions for professional use should employ a sophisticated, multifaceted security program that goes beyond locking down the AI platform and client data. Law firms typically must adhere to several data protection regulations, notably the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), and a secure solution should comply with these as well. Better still, a provider should meet additional security frameworks such as the NIST SP 800-53 Moderate baseline and the NIST Cybersecurity Framework, along with widely recognized information security standards such as ISO 27001 and SOC 2.

Given the stringent requirements of data security in legal practices, it makes sense to choose an AI product specifically built for use by legal professionals. An example is CoCounsel Legal, the professional-grade AI solution from Thomson Reuters, which integrates Westlaw and Practical Law content with Microsoft 365 and fulfills the security standards that legal practitioners need to meet.

The right tool for the practice

By asking the right questions and insisting on transparency from the vendor they choose, legal practices can protect sensitive information and build confidence in AI solutions while improving their practice.

Choosing an AI provider can be an overwhelming decision for a law firm. Read our white paper on evaluating legal AI solutions to sort through the details and develop a clearer picture of the AI market and its various vendors and products.

White paper

The pitfalls of consumer-grade tech and the power of professional AI-powered solutions for law firms

View white paper ↗
