Legal AI: The promise and the risk
Using AI in the practice of law is no longer a bold innovation. From legal research and document review to drafting, AI has become the new baseline for workflows. However, the AI tools available differ vastly. Empowering legal professionals with the right AI platform means choosing domain-specific systems built for accuracy, speed, and precision. To be effective, these priorities must also be supported by a tool that safeguards data and protects confidential information.
For legal teams in government, private practice, and in-house roles, security is a strategic imperative. It’s not enough to follow compliance checklists. Legal leaders can shape what’s possible by using purpose-built AI to protect sensitive data, maintain public trust, and meet the highest ethical and regulatory standards without compromise.
Generative AI (GenAI) and agentic AI are fast-emerging technologies with powerful advantages, but data security is a pressing concern for legal professionals. According to the recent Future of Professionals Report, 42% of those surveyed said the security of AI-powered technologies is a barrier to investment. Another 13% said data security implications are the most significant negative consequence of using AI.
Why do concerns remain?
New technologies often provoke concerns about cost, training, and effectiveness. The accessibility of consumer-grade AI makes dabbling in this technology seem like an easy, low-cost way to get started. But recent court cases have heightened those concerns for many.
In multiple cases, legal professionals have faced serious consequences for relying on tools like ChatGPT to generate legal filings, only to discover that those tools had fabricated case law:
- In a 2023 federal lawsuit against Avianca Airlines, a lawyer cited six nonexistent cases, complete with fake quotes and citations. The judge called the filings bogus and considered sanctions after the attorney admitted he had used ChatGPT, which falsely assured him the cases were real.
- In California, a Los Angeles-area attorney was fined $10,000, the largest known penalty for AI-generated legal fabrications, after submitting a brief with 21 out of 23 quotes made up by ChatGPT. The court issued a scathing opinion and warned that similar incidents were appearing in other jurisdictions.
- Attorneys in Wyoming were sanctioned for submitting a motion with eight fictitious citations while representing a client in a suit against Walmart. The questionable motion, drafted using AI, highlights the growing risks of using consumer-grade tools for professional legal work.
These cases underscore a critical point — generic AI tools may be easily accessible, but they aren't built for legal precision. When lawyers rely on them without verification, they risk ethical violations, reputational damage, and legal penalties. Leaders in the legal profession choose professional-grade AI tools as part of their strategy for excellence.
Sensitive data needs professional protection
Legal professionals handle some of the most sensitive data imaginable, including client identities, financial records, government documents, contracts, customer personally identifiable information, and privileged communications. Protecting this data is a legal and ethical obligation for law firms, courts, and government agencies. The following are common data security concerns.
Unauthorized data sharing
Free or consumer-grade AI tools may store, analyze, or share input data with third parties, which can violate confidentiality agreements and professional conduct rules. With professional-grade AI, legal professionals retain control over data.
Data-sharing example
A corporate counsel at a medium-sized company uploads a draft contract containing both parties' financial details into ChatGPT to polish the language. Without the attorney's knowledge, OpenAI's terms of service allow the platform to retain and review user inputs for training and moderation. That means the company's confidential information, including names, account numbers, and deal terms, could be accessed by platform reviewers or used to improve future models.
Lack of encryption
Public AI tools may rely on basic or unclear encryption protocols, leaving sensitive data vulnerable during transmission or storage. Professional-grade AI tools encrypt data both in transit and at rest.
Encryption risk example
A government attorney stores sensitive case files and agency documents on a cloud-based platform that lacks robust encryption. The system relies on basic password protection, and the data is not encrypted at rest. When the platform experiences a security breach, unauthorized actors gain access to the server. Because the files weren't encrypted at rest, hackers can immediately read confidential information, including personal identifiers, financial records, health data, and internal legal assessments.
This incident not only compromises protected constituent data but also exposes the agency to compliance violations under federal data protection laws and undermines public trust in government legal operations.
Data-retention policies
Many consumer AI platforms retain user inputs, which means they could store client information indefinitely.
User-input retention example
An attorney uses a free AI chatbot to summarize client deposition transcripts. The attorney pastes sections of the transcript — including names, medical history, and case strategy — into the chatbot. What the attorney doesn't realize is that the platform's data-retention policy allows it to store user inputs indefinitely. As a result, client data may remain within the platform's ecosystem and could be exposed to internal audits or reused in future AI development.
Unclear data ownership
If legal professionals use AI tools without a clear understanding of the terms of service, they may unknowingly surrender control over proprietary or client data.
Data ownership example
A midsize law firm uses a free AI writing assistant to draft internal strategy memos and client communications. The tool's terms of service are vague about data ownership and retention.
Without realizing it, the firm has loaded proprietary litigation strategies and client details into a system that claims broad rights to use, store, and even reproduce user inputs. Later, the firm discovers that similar phrasing from their confidential memo appears in unrelated content generated by other users of the same AI platform. Because the terms allowed the provider to reuse inputs, the firm has unknowingly surrendered control over its data.
Regulatory compliance risks
Improper data handling can breach standards under the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and the American Bar Association (ABA) Model Rules of Professional Conduct, leading to fines or disciplinary action.
Compliance risk example
A law firm uses a cloud-based AI tool to analyze client medical records for a negligence case. The tool isn't HIPAA-compliant, and the firm hasn't signed a business associate agreement (BAA) with the provider. As a result, the client's protected health information is processed without proper safeguards or legal authorization.
This misstep violates HIPAA regulations and ABA confidentiality standards. If regulators discover the breach, the firm could face fines, audits, disciplinary review, and reputational damage. Additionally, states are passing laws governing how their employees use AI; a government employee who relies on an unsecured tool could face repercussions.
Elevate legal practice with professional data security
Professional-grade AI tools offer a distinct advantage in data security. Built with rigorous safeguards, they protect sensitive information. Choosing a platform that meets standards such as ISO/IEC 42001:2023, FedRAMP, or SOC 2 Type 2 signals a commitment to confidentiality and integrity, core legal values that professionals build their careers on.
Certifications and standards
ISO/IEC 42001:2023
The first global standard dedicated to Artificial Intelligence Management Systems (AIMS), ISO/IEC 42001:2023 offers a structured framework for organizations to establish, operate, and continually improve their AI management systems.
When applied to legal tools, ISO/IEC 42001:2023 helps ensure AI systems are ethical, transparent, and secure in supporting compliance, protecting client data, and reinforcing public trust. It sets a standard for responsible AI governance in law.
Federal Risk and Authorization Management Program
The Federal Risk and Authorization Management Program (FedRAMP) establishes a unified framework for assessing, authorizing, and continuously monitoring the security of cloud services used by U.S. federal agencies. Using a FedRAMP-reviewed solution means you are using technology that meets rigorous federal standards for data protection, system integrity, and operational resilience.
FedRAMP is vital for government lawyers because it ensures cloud platforms meet federal security standards. It protects sensitive legal data, supports compliance with privacy laws, and reduces exposure to breaches, audits, and legal liability. By requiring continuous monitoring and agency authorization, FedRAMP helps legal teams confidently adopt AI and cloud tools while maintaining public trust and regulatory integrity.
SOC 2 Type 2
SOC 2 Type 2 compliance is an audit report that confirms an organization has effectively managed customer data over an extended period, typically six to 12 months, in accordance with the American Institute of Certified Public Accountants' (AICPA) Trust Services Criteria. Meeting this high standard demonstrates a technology company's ongoing commitment to data security and serves as a powerful trust signal to customers. It proves that the organization not only implements robust data-protection measures but also maintains them rigorously over time.
Where the right technology elevates your organization
Enhanced client trust
The commitment to using a secure platform fosters trust. Clients and constituents are more willing to share sensitive data when they know it will be handled securely. Professional-grade AI tools signal that legal teams take confidentiality seriously, reinforcing confidence in both private counsel and public institutions.
Lowered risk
At the same time, secure platforms reduce risk. Professional-grade AI helps legal teams avoid data breaches, ethical violations, and regulatory penalties often associated with consumer-grade tools that lack strong encryption, governance controls, or clear retention policies.
Reduced liability
Using enterprise-grade AI lowers liability. Platforms with clear governance, retention policies, and audit trails help legal teams demonstrate compliance with laws like GDPR, HIPAA, and ABA standards — protecting both organizations and individual practitioners.
Improved competitive advantage
Leading with data security is a strategic advantage. Legal teams that adopt professional-grade AI tools demonstrate to clients, courts, and regulators that they embrace innovation without compromising confidentiality. By prioritizing secure, compliant technology, they strengthen case outcomes and distinguish themselves in a competitive, high-stakes environment.
Empowered professionals
When lawyers have faith in their tools, they can do more. The advantages of GenAI and agentic AI are compelling reasons to use a professional platform. Freed from second-guessing a consumer-grade platform's output, legal professionals can spend more time on strategy.
Ultimately, choosing professional-grade AI is an act of strategic foresight. It anticipates threats, aligns with regulations, and builds lasting trust.
Data security: A defining standard
In legal and government sectors, data security isn't a secondary concern. It's a defining standard. As AI becomes embedded in legal workflows, professionals must choose whether to lead with secure, professional-grade tools or risk compromising excellence with generic platforms.
CoCounsel Legal is an AI-powered legal assistant built on trusted Westlaw and Practical Law content. It is used by U.S. federal courts and more than 20,000 law firms and legal departments, including 80% of the Am Law 100.
We do not train it on customer data; instead, it is trained on content generated and scrutinized by a team of hundreds of bar-admitted attorneys and data scientists. CoCounsel Legal sets the standard for responsible, high-performance legal AI.
Thomson Reuters is widely recognized as a leader in the advancement of AI for professionals. In addition to meeting standards like ISO/IEC 42001:2023, SOC 2 Type 2, and FedRAMP In Process, we have created our own standards and principles for developing legal AI tools, which provide the right foundations to build trustworthy, practical solutions for you. Explore our AI principles.