WHITE PAPER

Why general counsel need more than ChatGPT for legal AI

Make sure you’re using professional-grade AI

It’s a now-common scenario — a high school teacher quickly determines that a student's paper is the work of the AI model ChatGPT. Giveaways include stilted language, obvious factual errors, a lack of verifiable references, and jumps to unsupported conclusions. The student fails or rewrites the paper.

Some lawyers fear that using AI for a legal task will produce work of a similar caliber to this hypothetical student’s paper. They’re afraid of suffering the same fate as the many “ChatGPT lawyers,” whose output was inferior and rife with errors — including hallucinated case summaries and inaccurate data analyses — and whose poorly argued briefs left judges baffled.

Because these stories are so common in the media, some law department leaders still think “AI could turn out to be a fad. It will never reach the stage where a legal professional will feel confident employing it for use in litigation, or to fulfill a business’s objectives. So why should we invest in it?”

It’s a reasonable argument when you’re talking about public AI. It’s quite likely that general-purpose tools will never be up to the task of generating sophisticated legal work that holds up in a courtroom or properly protects a business. That’s why legal teams need to distinguish between publicly available, no-cost solutions such as ChatGPT and a professional-grade AI solution specifically designed for use by lawyers.

These two types of solutions already differ substantially in reliability and accuracy, and the gap is widening. It’s like the difference between searching Google to see what a spot on your arm might be and having that spot examined by a dermatologist.

Making an AI commitment

A legal professional risks making a serious mistake by dismissing AI as a fad. It’s now a matter of when, not if, your organization will adopt it in some form. That’s why choosing the right AI tool for your department is crucial.

The savings potential for a legal team using AI is vast. According to the Thomson Reuters 2025 Future of Professionals Report, AI is expected to save professionals about five hours per week, or roughly 240 hours a year, up from 200 in 2024.

That extra horsepower can go a long way on a legal team that is always working beyond capacity. Still, it’s important to note that not all AI is created equal. The potential savings and efficiency improvements become real only when you choose a tool you know you can trust.

When a lawyer needs to use the correct case citation, precisely define terms in a contract, or confidently rebut an opposing counsel’s claim, the lawyer requires a resource built on secure, up-to-date, and fully verified data. ChatGPT isn’t designed for these tasks, but the best legal AI solutions are. It’s time to find the optimal system for your organization.

The AI paradigm shift

Timing is everything for a busy legal team and the business it supports. Law departments that adopt AI-powered solutions quickly stand to reap substantial benefits in efficiency and productivity. Those gains will help them provide more effective legal counsel while managing their budgets in the new, AI-driven paradigm. As other parts of the business implement sophisticated technology, the law department will want to keep pace.

AI tools automate, heighten, and accelerate a wide array of tasks for attorneys, particularly time-consuming document analysis, research, and drafting tasks. By using AI for such tasks, attorneys can devote more time to higher-level work, such as strategic planning and developing relationships with business partners.

Very quickly, AI becomes a force multiplier, giving smaller teams the ability to punch above their weight and exert more control over outside counsel costs. For example, a 20-person team armed with a professional AI system could produce as much work as, if not more than, a larger team that delayed adopting new technologies.

Reducing time-consuming, repetitive grunt work also enables lawyers to devote more time to strategy and to polishing their work, improving their chances at better work-life balance and greater job satisfaction. In the most recent Generative AI in Professional Services Report, 55% of respondents said they were excited or hopeful about AI technology, an increase of 11 percentage points over 2024. Respondents cited time savings, streamlined work processes, increased productivity and efficiency, and new opportunities for innovation and growth as the top reasons for their optimism.

In many cases, corporate legal teams are leading the market toward greater AI adoption, encouraging law firms to adopt these efficiency and productivity tools. The logic is sound — as a client using AI, it’s reasonable to expect your firms to do the same, enhancing efficiency in both your internal operations and external counsel costs.

As law firm clients, corporate legal departments now have more leverage in pricing discussions, whether by advocating for AI and technology to reduce billed hours or by moving to flat-rate engagements.

“AI is an inevitable solution in my opinion, and resources that are not fit to be used will find themselves at the back of the line,” said one government legal advisor, interviewed for a previous Thomson Reuters Institute report. “There is great risk of loss of performance compared to the economy if we do not use it; it is the same issues as when Google released its famous white search box. And, if you don’t use a search engine, you’re out.”

Professional AI versus public AI: A tale of two different worlds

Legal professionals can gain a major advantage by using professional-grade AI tools. Legal experts design and constantly update these systems to address lawyers’ specific needs. They run on trusted, secure databases and provide transparent, well-sourced, and verified reasoning.

Within the category of professional-grade AI solutions, measurable differences exist among platforms. Recently, Vals.AI (Vals) surveyed leading professional AI services to see how they compared in areas such as data extraction, document Q&A and summarization, redlining, transcript analysis, chronology generation, and Electronic Data Gathering, Analysis, and Retrieval (EDGAR) research.

Accuracy: On target with each shot or firing wildly

Public AI. Say a lawyer is searching for most-favored-nation clauses within a set of trade agreements. If the lawyer uses a public AI solution, odds are the system will return an array of outputs of varying quality, some of which aren’t applicable in a legal context. Furthermore, the system could have difficulty with multi-clause extractions.

Professional AI. Professional-grade AI solutions offer greater clarity and reliability in terms of data extraction. In the Vals study, a set of professional AI systems was assessed to see how each identified and extracted information within the same document. 

Per Vals, “All AI tools were given the additional instruction to include only the extracted text in their responses, not any additional explanation. In all cases, a good response required both the verbatim extraction of specified information from the provided document and the source location for that information.”

The study found that professional-grade AI tools performed this task reasonably well, particularly with short documents and a single specific clause to extract. There were notable differences, however, when the systems were asked to extract multiple clauses for a single question — some systems stopped at one relevant clause without providing a substantial answer.

The most challenging question asked for 40 fields to be extracted across three credit agreements. CoCounsel from Thomson Reuters was the top performer in that task. CoCounsel and one competitor also performed above the “lawyer baseline” — the benchmark performance of an experienced human lawyer completing the same task.

Content authority: Trusted sources or unreliable information

Public AI. An AI platform like the public-model ChatGPT is built on a foundation of opaque sources, ranging from public-domain books to material scraped from the internet, some of which may be copyrighted or inaccurate. A lawyer who asks a public AI system for a list of cases relevant to their current task will then have to scrub the results manually: is this derived from Wikipedia? Out-of-date legal references? News articles from generalist sources?

Professional AI. A professional-grade AI platform draws its content from industry-respected and thoroughly vetted sources, including federal and state legislation and case histories. When integrated with legal research tools such as Westlaw, users gain access to a verifiable and consistently updated database of case law, statutes, and legal opinions. 

Westlaw and other Thomson Reuters AI systems use a retrieval-augmented generation (RAG) system, meaning they provide links for you to read the source material yourself. This capability enables professionals to work with confidence, knowing their queries and research are grounded in authoritative content.
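To make the idea concrete, here is a minimal sketch of the retrieval-augmented generation pattern described above. It is not Thomson Reuters’ implementation; the corpus, citations, and overlap-based relevance score are all illustrative stand-ins. The point is the shape of the workflow: retrieve source passages first, then ground the answer in them and return a citation the user can verify.

```python
from collections import Counter

# Hypothetical source library: each entry pairs text with a citation.
CORPUS = [
    {"citation": "Case A, 123 F.3d 456 (2020)",
     "text": "A most favored nation clause guarantees trade terms no worse "
             "than those offered to any other party."},
    {"citation": "Statute B, 15 U.S.C. Sec. 1 (2018)",
     "text": "Contracts in restraint of trade are declared illegal "
             "under this section."},
]

def score(query: str, text: str) -> int:
    """Count query words that appear in the passage (toy relevance score)."""
    query_words = Counter(query.lower().split())
    text_words = set(text.lower().split())
    return sum(count for word, count in query_words.items() if word in text_words)

def retrieve(query: str, k: int = 1):
    """Return the top-k passages ranked by the overlap score."""
    ranked = sorted(CORPUS, key=lambda d: score(query, d["text"]), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    """Ground the response in retrieved text and cite the source."""
    top = retrieve(query)[0]
    return f"{top['text']} [Source: {top['citation']}]"

print(answer("What does a most favored nation clause guarantee?"))
```

A production system would replace the keyword score with semantic search over a curated legal database, but the citation-first structure is what lets a lawyer verify the answer rather than take it on faith.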

Additionally, a system like CoCounsel Legal has a dedicated method for benchmarking any new large language models (LLMs). Each new LLM is run through a series of public and private legal tests to assess its aptitude for legal review and analysis. CoCounsel Legal’s Trust Team creates further test cases to evaluate output from the LLM’s early integration with existing systems and, if the results are promising, a team of attorneys performs additional manual reviews.
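A benchmarking pass of this kind can be sketched in a few lines. This is a hypothetical harness, not CoCounsel Legal’s actual process: the test suite, the keyword-based grading rule, the pass threshold, and the canned “model” are all assumptions for illustration.

```python
def keyword_grade(response: str, required: list[str]) -> bool:
    """Pass if the response mentions every required legal concept."""
    return all(term.lower() in response.lower() for term in required)

def run_benchmark(model, test_cases, pass_threshold=0.8):
    """Run a candidate model over the suite; return (score, passed)."""
    passes = sum(keyword_grade(model(tc["prompt"]), tc["required"])
                 for tc in test_cases)
    score = passes / len(test_cases)
    return score, score >= pass_threshold

# Stand-in "model" and test suite for demonstration only.
def toy_model(prompt: str) -> str:
    canned = {
        "Define consideration.":
            "Consideration is a bargained-for exchange of value.",
        "What is hearsay?":
            "Hearsay is an out-of-court statement offered for its truth.",
    }
    return canned.get(prompt, "")

SUITE = [
    {"prompt": "Define consideration.", "required": ["bargained-for", "value"]},
    {"prompt": "What is hearsay?", "required": ["out-of-court", "truth"]},
]

score, passed = run_benchmark(toy_model, SUITE)
print(score, passed)  # 1.0 True
```

Real legal benchmarks grade on substance rather than keywords, with attorney review of borderline outputs, but the gating logic — no integration until the candidate model clears a measured bar — is the same.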

Expert insights: Thousands of credentialed experts or a faceless crowd

Public AI. In public AI solutions, it can be challenging to determine how the system prioritizes which sources to use in answering a query. There’s a valid concern that the AI will assign the same weight to a trusted information source — such as a news organization like Reuters — as it does to a financial blog written by a layperson that hasn’t received an update since 2015.

Professional AI. Developers of a professional-grade AI model will explain to users how their system works, the processes that the model uses to answer queries and conduct data searches, and all steps the provider takes to refine its AI’s capabilities. A professional-grade AI provider will implement regular corrective measures to prevent data drift — any changes in the statistical characteristics of input data over time that degrade the model’s performance.
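Data-drift monitoring can be as simple as comparing summary statistics of recent inputs against a baseline window. The sketch below is illustrative, not a description of any vendor’s pipeline; the feature (document length) and the three-standard-deviation threshold are assumptions.

```python
from statistics import mean, stdev

def drift_detected(baseline, recent, threshold=3.0):
    """Flag drift when the recent mean shifts more than `threshold`
    baseline standard deviations away from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    shift = abs(mean(recent) - mu) / sigma
    return shift > threshold

# Example: document lengths (in pages) seen at training time vs. now.
baseline_pages = [10, 12, 11, 9, 10, 13, 12, 11]
stable_pages = [11, 10, 12, 11]
shifted_pages = [48, 52, 50, 47]  # e.g., a new document type appears

print(drift_detected(baseline_pages, stable_pages))   # expect False
print(drift_detected(baseline_pages, shifted_pages))  # expect True
```

Production monitors track many features and use proper statistical tests, but the principle is the one described above: detect when today’s inputs no longer look like the data the model was tuned on, and correct before performance degrades.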

Hence the importance of having a vast number of experts on hand. These experts keep the AI solution abreast of any new regulations, laws, or case interpretations, and they help ensure that its output is consistently accurate. An AI system is only as strong as the people whose knowledge it draws upon.

Data security and privacy: Fort Knox or Penn Station

According to the 2025 Generative AI in Professional Services Report from Thomson Reuters Institute, approximately 68% of professionals see data security as a top barrier to adopting AI. This is another critical distinction between public and professional-level AI products.

Public AI. Using a public product like ChatGPT to analyze proprietary information means that you’re often taking risks with your data. It can be unclear how these AI programs use prompts and other user data. It’s also unclear whether the system deletes or retains the information for training purposes after a query. If this information is confidential or copyright-protected, public AI products typically lack adequate safeguards to protect it.

Also, some public AI providers make vague or even evasive claims about data usage, the ownership of your data, how often the data gets deleted, and which third parties will have access to it. Users may not have the option to exclude their input from being used in training the AI system.

Professional AI. A professional-grade AI provider must reassure users that it is taking necessary measures to safeguard customer data. For example, CoCounsel Legal bases its cybersecurity program on several industry standards, including the National Institute of Standards and Technology (NIST) Cybersecurity Framework and the Capability Maturity Model Integration (CMMI).

It’s essential that users of professional AI systems retain ownership of all input and output data. The provider should describe how its model uses data for systemic improvements, outline the options for data anonymization, and explain how users can opt out of the training process.

Further, AI providers must establish a well-defined data deletion process with specific timelines to ensure user privacy and compliance; for example, all data might be deleted after 30 days unless the customer specifies otherwise. Providers should also offer transparent disclosure of any third-party data sharing. In the event of a data breach, the platform should have a well-honed incident response plan, including communication protocols and guarantees of timely notification to all relevant parties.

Training: Professional classroom or DIY

Public AI. Learning to use a public AI system often involves a do-it-yourself approach. While these systems are generally user-friendly and intuitive, there’s a learning curve in mastering query refinement to receive the best results.

However, it’s becoming an increasingly challenging task to keep up with AI’s rapid pace of adoption and to stay abreast of changes in AI systems. For example, the public version of ChatGPT was unveiled in November 2022 and has already gone through multiple revisions.

More than half of respondents in the recent Generative AI in Professional Services Report said that while they’ve had some AI training, it’s only on an annual basis — or even less frequently.

Professional AI. A professional-grade AI provider should deliver regular, intensive user training. Ultimately, an AI platform must be intuitive and supportive, so lawyers feel confident in their ability to use it effectively. This confidence includes ensuring their queries are accurately articulated and tasks are handled efficiently.

For example, CoCounsel Legal offers pre-recorded training modules meant to address typical user concerns and real-time connections with experts to clarify any aspect of the system.

Sidebar: Getting AI right in the corporate legal department

You’ve decided to use an AI tool to help your legal team respond to a high-stakes regulatory inquiry. Here are three essential steps to ensure your team protects the company’s credibility, manages risk, and meets professional standards.

Step 1: Trust — but verify

When responding to government regulators, board members, or outside counsel, every word matters. AI can help speed up document review, risk summaries, or compliance comparisons — but only if the outputs are reliable.

Unlike generic AI tools, professional-grade legal AI provides citations, source documents, and legal context for each answer. It doesn’t just make predictions — it grounds its work in verified law and internal documents, so in-house lawyers can move quickly without guessing or gambling.

Step 2: Protect your company’s reputation

When dealing with regulators or internal stakeholders, a single factual error — especially one tied to AI — can raise red flags about process, credibility, and controls. Imagine sending a regulatory response that misstates a rule or misinterprets a state statute. The cost could be reputational damage, a delayed deal, or an enforcement action.

That’s why it’s critical to use an AI system built for legal accuracy, transparency, and auditability. CoCounsel Legal draws from trusted sources like Westlaw and Practical Law, and is designed to support — not replace — the judgment of in-house counsel. With features like Deep Research and agentic workflows, it helps teams move fast without cutting corners.

Step 3: Know your stakeholders and your obligations

In-house lawyers work at the intersection of legal, business, and compliance. Whether you're briefing the board, preparing a public disclosure, or responding to a subpoena, transparency matters. Be prepared to explain how AI was used, how results were verified, and what content was relied upon.

With CoCounsel Legal, those conversations are easier. The platform includes built-in guidance and documentation to help teams disclose and defend their use of AI. This is essential for legal operations teams working under increasing scrutiny and managing growing volumes of work with leaner teams.

The value of going professional

The bottom line — in-house teams don’t just need AI; they need the right kind of AI. CoCounsel Legal helps you scale your legal work without risking credibility, compliance, or control.

It’s all about finding the right tools to do a complex job. CoCounsel Legal leverages the expertise of Westlaw and Practical Law to provide a robust solution you can rely on.

Bring together generative and agentic AI, trusted content, and expert insights with CoCounsel Legal — built and supported by a team dedicated to getting customers the maximum benefit of AI. Discover how it helps legal professionals work more efficiently and deliver powerful results for clients.

CoCounsel Legal

AI lawyers swear by

Streamline complex, in-house legal work and drive business outcomes faster and more confidently