

Ethical uses of generative AI in the practice of law

· 9 minute read


Overview of ethical uses of generative AI (GenAI) in the legal sector, its history, and today's challenges

In the rapidly evolving landscape of legal technology, the integration of generative AI tools presents both unprecedented opportunities and significant ethical challenges. Ryan Groff, a distinguished member of the Massachusetts Bar and a lecturer at New England Law, explores these dimensions in his webinar, “Ethical Uses of Generative AI in the Practice of Law.”

In the webinar, Groff discusses the ethical implications of using generative AI (GenAI) in legal practice, traces the history of GenAI applications in law, and distinguishes between the various AI tools available today.

Jump to ↓

Brief history of AI in law


Ethical AI frameworks


Implementing AI ethically for legal


How to choose an ethical legal AI solution

Legal AI all-in-one


When trusted professional-grade AI meets more than 150 years of authoritative legal content and expertise.

See what AI can do for you ↗

Brief history of AI in law

Groff then provided a brief history of AI in the legal sector, focusing on the evolution of large language models tailored to the specific needs of legal professionals. He also described the types of AI tools available today, covering both general-purpose tools and those designed specifically for legal tasks, to illustrate the broad spectrum of technologies now open to legal professionals.

It’s worth noting that these technologies are developing within broader regulatory frameworks, including the EU AI Act, ABA ethics opinions on technology, and various state bar association guidelines addressing AI use in legal practice. Groff emphasized that while AI can enhance the efficiency of legal practice, it should not undermine the critical judgment of lawyers. He underscored the importance of maintaining rigorous supervision, safeguarding client confidentiality, and ensuring technological proficiency.

Key legal tech milestones: 

  • 2012: Predictive coding receives judicial approval in Da Silva Moore v. Publicis Groupe 
  • 2016: ROSS Intelligence launches IBM Watson-powered legal research platform 
  • 2018: Contract analysis platforms begin using machine learning for due diligence 
  • 2021: GPT-based legal drafting assistants emerge 
  • 2023: Domain-specific legal LLMs with reduced hallucination rates 
  • 2024: Integration of legal reasoning capabilities with trusted source verification 
  • 2025: Deep research and AI agents surface across the industry, elevating the impact of what AI can do

With a long history in AI, Thomson Reuters has consistently pushed the boundaries of what’s possible. Tools reflecting this evolution include Westlaw Edge’s predictive analytics, Practical Law’s AI-enhanced content, and CoCounsel’s specialized legal capabilities built on domain-specific models.

Especially now, with the rise of agentic AI (not to be confused with generative AI), there are ways legal professionals can continue to save time and boost productivity. With agentic AI, a single prompt can trigger multi-step tasks. Agentic AI can create a broader workflow than GenAI can, helping with tasks like performing legal research, comparing draft documents, and preparing for a deposition in a single project.

E-book


An essential guide for bringing AI agents to your organization

Access e-book ↗

Ethical AI frameworks

Basic assumptions about legal ethics and AI 

“AI should act as a legal assistant, not as a substitute for a lawyer.”

Ryan Groff

Massachusetts Bar and lecturer at New England Law

Groff set the stage by asserting these five principles: 

  1. AI as assistant, not substitute: AI should function as a legal assistant, enhancing lawyer capabilities rather than replacing professional judgment. 
  2. Attorney responsibility: Lawyers bear ultimate responsibility to verify all AI outputs before relying on them professionally. 
  3. Avoiding unauthorized practice: Unsupervised AI use may constitute unauthorized practice of law by proxy. 
  4. Maintaining competence: Lawyers must understand AI capabilities and limitations for appropriate deployment. 
  5. Preserving confidentiality: Client confidentiality must be protected when using third-party AI platforms. 

Failure to adhere to these ethical principles may result in disciplinary action, malpractice exposure, and breach of fiduciary duties to clients. No matter what tools they use, it is important for lawyers to maintain independent judgment and commit to ethical practices in law. 

Understanding responsible deployment 

“Responsible development of AI is crucial for its ethical application in law.”

Ryan Groff

Massachusetts Bar and lecturer at New England Law

Highlighting the ethical deployment of AI, Groff stressed the importance of user control and accountability, advocating for a thorough understanding of AI’s capabilities and limitations to ensure its appropriate use. 

Evaluating an ethical AI framework: 

  • Clarity: Can the AI clearly explain its reasoning process? 
  • Accuracy: Is the output factually and legally correct? 
  • Explainability: Can the lawyer understand and explain how the AI reached its conclusions? 
  • Auditability: Can the process be reviewed and verified if challenged? 

Potential and limitations of AI in law 

“AI can support creative tasks but should not handle substantive legal decisions alone.”

Ryan Groff

Massachusetts Bar and lecturer at New England Law

Groff advised caution, pointing out the critical need for domain-specific engineering in AI tools to ensure they are fit for legal purposes. He also provided several notable case studies that serve as practical examples of both the potential and the pitfalls of AI in legal practice. 

Specific limitations and implications: 

  • Hallucinations: AI can generate plausible but false information, particularly concerning in legal contexts where accuracy is paramount 
  • Data bias: Models trained on historical cases may perpetuate existing biases in the legal system 
  • Lack of transparency: “Black box” functionality can make it difficult to understand reasoning 
  • Limited contextual understanding: AI may miss nuanced client needs or jurisdiction-specific requirements 

Mitigation strategies: 

  • Human-in-the-loop review: Implement mandatory human verification protocols 
  • Red-teaming tools: Systematically test AI tools to identify vulnerabilities before deployment 
  • Restricted use cases: Limit AI to lower-risk tasks until reliability is established 
  • Documentation: Maintain records of when and how AI was used in client matters 

Infographic

Data and AI ethics principles. Promoting trustworthiness and integrity in AI development and deployment.

View the 8 ethics principles ↗

Implementing AI ethically for legal

Beyond understanding ethical principles, legal professionals need practical guidance on operationalizing AI within their practice. Here’s a framework for implementing AI ethically in legal work: 

Phased AI integration strategy: 

  1. Assessment phase: Identify specific use cases where AI adds value 
  2. Testing phase: Pilot AI tools in controlled environments with benchmarking 
  3. Deployment phase: Implement with clear policies and training 
  4. Monitoring phase: Establish ongoing quality control and feedback mechanisms 

AI use policy components: 

  • Allowed uses: Research assistance, document review, initial drafting 
  • Limited uses: Client communications, court submissions (with verification) 
  • Prohibited uses: Unsupervised legal advice, autonomous decision-making 

Circumstances warranting disclosure: 

  • When AI substantially contributes to legal analysis or strategy 
  • When client data is processed through third-party AI platforms 
  • When AI is used for tasks traditionally performed by attorneys or paralegals 
  • When disclosure might impact attorney-client privilege considerations 

Real-world example: A law firm successfully implementing ethical AI use might employ a contract review system with attorney oversight, where the AI flags potential issues but attorneys verify all findings and maintain client communication. The system maintains audit logs showing which sections were AI-analyzed and which received human review, creating accountability and transparency throughout the process. 
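As a purely illustrative sketch of that kind of audit trail, the Python snippet below shows what a single log entry might capture. The AuditEntry class, its field names, and the example values are hypothetical assumptions for illustration only; they are not drawn from the webinar or from any Thomson Reuters product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record of one AI-assisted review step in a client matter.
# Field names are illustrative; adapt them to your firm's own policies.
@dataclass
class AuditEntry:
    matter_id: str           # client matter the work belongs to
    section: str             # document section the AI analyzed
    ai_tool: str             # which AI system produced the initial analysis
    ai_findings: list[str]   # issues the AI flagged for attorney review
    reviewed_by: str         # attorney who verified (or rejected) the findings
    human_approved: bool     # outcome of the mandatory human review
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: the AI flags an indemnification clause; an attorney signs off.
audit_log = [
    AuditEntry(
        matter_id="2024-0113",
        section="Section 7.2 - Indemnification",
        ai_tool="contract-review-assistant",
        ai_findings=["uncapped liability", "missing notice period"],
        reviewed_by="A. Attorney",
        human_approved=True,
    )
]
```

Even this much structure makes it straightforward to show, after the fact, which portions of a matter were AI-assisted and which attorney verified them, supporting the accountability and transparency the example describes.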

Staying on top of ethical practices 

“Lawyers must maintain tech competence and ensure ongoing learning.”

Ryan Groff

Massachusetts Bar and lecturer at New England Law

Groff reiterated that core ethical rules apply to AI without exceptions. He also noted the necessity for lawyers to maintain technological competence and engage in ongoing learning to effectively supervise AI legal assistants and ensure confidentiality and effective communication. 

Emerging governance frameworks, including the NIST AI Risk Management Framework and industry-specific best practices, will help address these issues over time as the technology matures. 

Staying current on ethical tech use: 

  • Consider courses and certifications specifically addressing AI ethics in legal practice 
  • Join legal technology committees through bar associations 
  • Sign up for the free monthly Thomson Reuters AI newsletter to get insights on how AI is transforming the legal industry and notifications about future webinars 
  • Participate in legal conferences to understand emerging capabilities 

E-book


A legal professional's handbook for reliable AI tools

Access e-book ↗

Groff then wrapped up the session by recapping the key points discussed, reminding legal professionals of the importance of continuous learning and adaptation to integrate AI ethically and effectively into their practices. He closed with a reminder that technology must enhance rather than replace the human element in law. 

How to choose an ethical legal AI solution

Groff drew a clear distinction between general AI tools and those specifically engineered for legal applications, highlighting the importance of selecting the right tools for specific legal tasks. 

Evaluation criteria for legal AI solutions: 

  • Accuracy: Verify factual and legal precision against trusted sources 
  • Confidentiality: Review terms of service for data retention and use policies 
  • Terms of use: Assess licensing, liability, and compliance with firm policies 
  • Integration: Consider compatibility with existing workflow systems 
  • Training: Domain-specific legal training vs. general-purpose models 

As AI technology continues to evolve, so too will the ethical considerations surrounding its use in legal practice. Legal professionals must remain vigilant, staying informed about emerging best practices and regulatory developments. Thomson Reuters is committed to providing ongoing insights and solutions to help navigate this changing landscape. 

CoCounsel Legal


Bringing together AI, trusted content, and expert insights

Go professional-grade AI ↗
