

How to use AI and keep firm and client data safe

· 7 minute read

Get the benefits of AI without putting your information at risk


Legal professionals have learned two important things about generative AI (GenAI) over the past two years. First, GenAI can deliver excellent work much more quickly than a human. Second, not all GenAI is the same: some models are general-purpose, consumer-level tools, while others are trained on specialized data for specific purposes.

With the right expertise, it’s possible to build a solution that harnesses the power of AI and is trustworthy enough for legal practice, and such products are quickly becoming must-haves for attorneys. Just as important as an AI product’s reliability, though, is its ability to keep confidential firm and client information secure and private.

The need for stringent security continues to grow

Just how critical data privacy and security are to legal practice has been underscored by the spike in law firm data breaches. Since 2020, more than 750,000 Americans have had their personal information compromised as a result of law firm cyberattacks.

And while consumer-facing products powered by large language models (LLMs) do protect users’ data, that protection doesn’t rise to the level of security and privacy required for high-stakes situations involving privileged and highly confidential information.

For starters, comprehensive privacy and cybersecurity protocols, rigorous audits and testing, and a high level of domain-area expertise are required to create AI solutions that meet the standards legal practitioners must adhere to. Building a generative AI-powered solution that’s both reliable and secure enough for use by legal professionals isn’t necessarily easy, but it is possible — we’ve done it with CoCounsel.

Using generative AI on its own isn’t worth the risk

Because of its prevalence and powerful capabilities, many lawyers already incorporate AI-powered tools into their practice. Some are still using free, general-purpose models like ChatGPT, despite known data security and privacy risks. In response to ChatGPT’s data leak and to requests for measures to protect personal data, OpenAI has implemented a Personal Data Removal Request form allowing users to ask that their information be deleted.

But this protection is limited to users based in certain jurisdictions, such as Japan and GDPR-protected Europe. And even if a removal request is approved and OpenAI does not retain information provided in ChatGPT conversations, it appears the data may still be used to train the model.

Given these risks, general-use LLMs like ChatGPT don’t fulfill the strict obligations attorneys have to protect privileged work product and confidential client information.

When considering integrating an AI solution into your practice, it’s crucial to choose a product specifically built for use by legal professionals. This kind of carefully engineered, professional-grade AI — such as our AI legal assistant, CoCounsel — can both capture the power of advanced LLMs and eliminate security and data privacy risks.

For instance, security and privacy should be integral to a product’s creation, not add-on features. As pointed out in Harvard Business Review, companies practicing top-notch cybersecurity are committed to “ensuring security is not an afterthought through processes such as DevSecOps, a method that integrates security throughout the development life cycle.” When building CoCounsel, we began with security in mind, evidenced by, among other considerations, our requirement that our AI models never store our users’ data or use it for training.
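To make that idea concrete, here is a minimal sketch of the kind of automated gate a DevSecOps pipeline might run on every commit: a scan that fails the build if likely credentials appear in source files. This is an illustration of the general practice, not an excerpt of Thomson Reuters’ actual pipeline, and the file types and patterns shown are simplified assumptions.

```python
# Illustrative only: a minimal pre-merge secret scan, the kind of check a
# DevSecOps pipeline might run on every commit. The file glob and regex
# patterns are assumptions, not a production-grade rule set.
import re
import sys
from pathlib import Path

# Naive signatures for common credential leaks (illustrative examples).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # embedded private key
    re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][^'\"]{16,}['\"]"),
]

def scan(root: str = ".") -> list[str]:
    """Return 'path:line' locations that match a secret pattern."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append(f"{path}:{lineno}")
    return hits

if __name__ == "__main__":
    findings = scan()
    for hit in findings:
        print(f"possible secret: {hit}")
    sys.exit(1 if findings else 0)  # non-zero exit fails the CI job
```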

What to look for in a professional-grade legal AI solution

When considering using AI in their practice, attorneys should look for these four key indicators of high-level security measures:

Customer-first data storage policies

It’s critical that you, as the customer, control how your data is used, accessed, and stored. In stark contrast to ChatGPT, CoCounsel accesses OpenAI’s GPT-4 model through private, dedicated servers and a zero-retention API.

All data is encrypted in transit and at rest. This means OpenAI cannot store any customer data longer than required to process the request and cannot view any of that data or use it to train CoCounsel’s underlying LLM. You always retain control over your data and can remove it completely from the platform at any time.
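For readers who want a concrete picture of what “encrypted at rest” means, the sketch below uses the open-source Python cryptography library to show the general technique. It is an illustration only, not CoCounsel’s actual implementation; in particular, the in-memory key here stands in for the managed key service a production system would use.

```python
# Illustrative only: symmetric encryption at rest with the `cryptography`
# library (pip install cryptography). Real systems keep keys in a managed
# key service (KMS/HSM), never alongside the data -- that part is assumed away.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, fetched from a key service
fernet = Fernet(key)

document = b"Privileged client memorandum..."

ciphertext = fernet.encrypt(document)    # what actually lands on disk
assert fernet.decrypt(ciphertext) == document

# Storing only `ciphertext` means a stolen disk or database dump reveals
# nothing without the key; protection in transit is handled separately by
# TLS on the connection between client and service.
```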

Stringent security controls

Those providing AI for professional use should employ a sophisticated, multifaceted security program that goes beyond securing just the AI platform and customer data. Look for both internal and external security resources.

For example, Thomson Reuters has taken extensive measures to ensure that CoCounsel aligns with NIST 800-53 Moderate and the NIST Cybersecurity Framework, two of the most respected security frameworks in the industry. And CoCounsel’s security controls are mapped onto ISO 27001 and SOC 2 standards, which are internationally recognized as best practices for information security management.

Additional protocols include a rigorous vendor vetting and management program; independent verification, auditing, and testing; and up-to-date, comprehensive Incident Response and Business Continuity and Disaster Recovery plans.

A long record of success

A more nebulous but still important factor is how long the AI developer has been in the business, meaning not just legal tech generally, but the very complex business of building LLM-powered products for legal professionals.

Examine a company’s performance history, specifically in the areas of security and expertise in AI. Prior leaks or other security incidents are obvious red flags. If a developer has only been in AI for a year or two, there’s little record to examine in terms of both incidents and expertise.

Thomson Reuters has not only been at the forefront of applying the power of LLMs to the practice of law for several years, but has provided customers with a safe and secure platform for more than a decade.

Adoption among industry leaders

Who does the legal AI provider count among its clients? Peer firms’ adoption of an AI solution indicates whether its security program is robust enough for law practice.

Take CoCounsel as an example. More than 40 of the Am Law 200 have subjected CoCounsel to rigorous security review. Thousands of client firms have integrated CoCounsel into their practice, including top-ranked firms such as DLA Piper, Troutman Pepper, and Dykema, as well as Fortune 50 companies, including Ford and Microsoft.

The fact that some of the world’s leading firms and businesses trust Thomson Reuters to securely manage their most sensitive data speaks volumes.

When integrating AI into practice, attorneys need to know they’re using a platform they can trust, meaning one that will ensure they meet their obligations to protect client data and privileged work product. Look for providers who adhere to industry-leading security frameworks and are committed to data privacy, as demonstrated by their company’s history, expertise, and clientele.

Interested in learning more about how CoCounsel can help you deliver your work product faster and more efficiently? Request a free demo today.

Related blog

Enhancing client relationships with better data privacy and security practices

View blog ↗
