

Establishing trust in generative AI

5 minute read

OpenAI’s ChatGPT prototype opened a world of possibilities for generative artificial intelligence (AI), and the technology continues to revolutionize the way we live and work. As its role in our daily lives continues to expand, many lawyers and legal industry experts are beginning to recognize what these tools could mean for the profession.

In a recent survey of lawyers conducted by the Thomson Reuters Institute, 82% of legal professionals said they believe that generative AI such as ChatGPT can be applied to legal work, and 59% of partners and managing partners said it should be. Despite this large majority, organizations are still taking a cautiously proactive approach: 62% of respondents expressed concerns about the use of ChatGPT and generative AI at work, and all who were surveyed noted that they do not fully trust generative AI tools like ChatGPT with confidential client data.

The introduction of any new technology comes with understandable hesitancy, but these key findings show one thing for certain: establishing trust in AI is essential. So how can legal professionals build trust in AI? 

 

Trust in AI starts with transparency

As explained in our companion article about the current state of AI, transparency in relation to AI means that stakeholders, including developers, users, and legal practitioners, should feel confident in the quality of the data that AI models use and in how those models make decisions. They should also be prepared to test AI outputs against their own knowledge and expertise during early implementations.

Furthermore, transparency gives confidence that ethical principles are being followed throughout the development and deployment of a generative AI system to avoid bias. This includes confirming that data used for training models is collected responsibly and without prejudice toward particular groups or individuals.  

 

Legal work and AI transparency 

The greatest and most fundamental trust generator is accuracy. Keep in mind that generative AI tools may not always produce correct results and that human oversight is still necessary. Therefore, lawyers should always review any documentation provided by an AI system to ensure that it meets standards for accuracy, explainability, and security.

Using generative AI with confidential client data also means firms must establish proper governance structures, including strong security measures such as:

  • Encryption and authentication protocols
  • Robust policies about ethical usage
  • Regular auditing and testing
  • Strong content filtration systems
  • Human in the loop (HITL): always submit large language model (LLM) outputs to human review (at least during the initial phase of working with LLMs)
  • Knowing your user (for traceability)
  • Educating employees about the promise of LLMs and their limitations
  • Designating personnel trained on best practices for using the technology securely and responsibly (a brief sketch of how a few of these controls might look in code follows this list)
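
To give a sense of how a few of these controls might fit together, here is a minimal, hypothetical Python sketch of a governed LLM call: every request is tied to a known user for traceability, an audit entry is logged, and a very coarse content filter runs before the model is invoked. The function names, the blocked-term list, and the placeholder `call_llm` are illustrative assumptions rather than references to any particular product or API.

```python
import hashlib
import logging
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm_audit")

# Hypothetical blocked-term list; a real content-filtration system would use
# a proper classifier or policy engine rather than keyword matching.
BLOCKED_TERMS = {"social security number", "bank account number"}


@dataclass
class LLMRequest:
    user_id: str   # "know your user": every call is tied to an identity
    prompt: str


def content_filter_ok(text: str) -> bool:
    """Very coarse filter standing in for a real content-filtration layer."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def call_llm(prompt: str) -> str:
    """Placeholder for the actual model call (e.g. a vendor API)."""
    return f"[draft answer for: {prompt[:40]}...]"


def governed_llm_call(request: LLMRequest) -> Optional[str]:
    # Audit entry: who asked, when, and a hash of the prompt (not the raw text)
    prompt_hash = hashlib.sha256(request.prompt.encode()).hexdigest()[:12]
    audit_log.info("user=%s prompt_hash=%s time=%s",
                   request.user_id, prompt_hash,
                   datetime.now(timezone.utc).isoformat())

    if not content_filter_ok(request.prompt):
        audit_log.warning("user=%s prompt blocked by content filter", request.user_id)
        return None

    return call_llm(request.prompt)


if __name__ == "__main__":
    req = LLMRequest(user_id="attorney-042",
                     prompt="Summarize the lease termination clause.")
    print(governed_llm_call(req))
```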

Having these measures in place will help ensure that risks are minimized while generative AI’s potential benefits are maximized. One consideration is adopting a “fail-safe” system in which any discrepancies or errors flagged by the AI tool are automatically escalated for manual review by a lawyer or other qualified personnel. This way, inaccurate outputs can be corrected quickly and confidently, to the benefit of clients and organizations alike.
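
As a rough illustration of that kind of fail-safe routing, the hypothetical sketch below sends any draft that carries a flagged discrepancy, or that falls under an assumed confidence threshold, to a queue for lawyer review, while everything else proceeds. The threshold, field names, and queues are placeholders that a firm would define for itself.

```python
from dataclasses import dataclass, field
from typing import List

# Assumed confidence threshold below which a draft is escalated;
# a real system would tune this against measured accuracy.
REVIEW_THRESHOLD = 0.85


@dataclass
class Draft:
    matter_id: str
    text: str
    confidence: float                              # estimated accuracy score
    flags: List[str] = field(default_factory=list)  # discrepancies detected downstream


def needs_escalation(draft: Draft) -> bool:
    """Fail safe: anything flagged or low-confidence goes to a human reviewer."""
    return bool(draft.flags) or draft.confidence < REVIEW_THRESHOLD


def route(draft: Draft, review_queue: List[Draft], release_queue: List[Draft]) -> None:
    # Escalated drafts wait for a lawyer; the rest move on in the workflow.
    (review_queue if needs_escalation(draft) else release_queue).append(draft)


if __name__ == "__main__":
    review, release = [], []
    route(Draft("M-101", "Draft indemnity summary", 0.93), review, release)
    route(Draft("M-102", "Draft NDA clause", 0.97, flags=["citation mismatch"]), review, release)
    route(Draft("M-103", "Draft privilege log entry", 0.61), review, release)
    print("escalated for lawyer review:", [d.matter_id for d in review])
    print("cleared for release:", [d.matter_id for d in release])
```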

It is clear that generative AI is poised to revolutionize the way the legal industry practices law, and it is here to stay. But with this emerging technology comes a crucial need for trust — giving lawyers the confidence that they are able to protect client interests while learning to do better work with an innovative solution.

 

