
AI and fraud: Are you ready?

· 8 minute read


You might have recently read about a French woman who was conned out of roughly $830,000 by scammers using images of actor Brad Pitt generated by artificial intelligence. While this isn’t a situation that risk and fraud professionals are likely to face, it points to the undeniable fact that bad actors are using AI for fraudulent purposes. The World Economic Forum recently noted that “the rise of generative AI [GenAI] has significantly increased the scale and sophistication of cybercrime, particularly identity theft and fraud.”  

As AI’s use becomes more widespread and its capabilities more sophisticated, risk and fraud professionals must manage the fraud risk it generates. AI is introducing new risks that demand these professionals’ particular vigilance. At the same time, adopting AI into their own workflows can help them manage those very risks.

All of this is possible with proper preparation for the AI-powered future in three key areas.

 

Embrace change

While AI creates new types of fraud risk, its benefits far exceed its dangers. In fact, 30% of corporate risk and fraud departments surveyed in the 2024 Generative AI in Professional Services report say that their organizations are either using or planning to use GenAI. These professionals see how AI can make their workflows more efficient. Risk and fraud professionals can use AI to analyze large datasets and identify patterns to more quickly detect fraud, as the sketch below illustrates. They can also use AI tools to surface valuable industry insights.
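
To make the pattern-detection idea concrete, here is a minimal sketch of unsupervised anomaly screening using scikit-learn's IsolationForest. The transaction fields, values, and contamination rate are hypothetical and chosen purely for illustration; a real deployment would train on actual historical data and route every flag to a human reviewer.

```python
# Minimal sketch: flag unusual transactions with an unsupervised
# anomaly detector. All fields and numbers below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for real data: [amount_usd, hour_of_day, txns_last_24h]
normal = rng.normal(loc=[80, 14, 3], scale=[30, 4, 1], size=(1000, 3))
suspicious = rng.normal(loc=[4000, 3, 40], scale=[500, 1, 5], size=(5, 3))
X = np.vstack([normal, suspicious])

# contamination is an assumed prior on the share of fraudulent activity
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 = anomalous, 1 = looks normal

for i in np.where(flags == -1)[0]:
    print(f"Transaction {i} flagged for human review: {X[i].round(1)}")
```

The design point worth noting: the model only prioritizes cases for attention; the judgment call stays with the professional.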

All that said, risk and fraud professionals have real concerns about using AI, and increasingly sophisticated fraud is just one of them. According to the Thomson Reuters GenAI report, concerns like the following can be barriers to AI adoption:

  • Potential for an AI tool to deliver inaccurate responses  
  • Privacy and confidentiality of sensitive data uploaded to GenAI tools
  • Compliance with relevant laws and regulations  

These are legitimate concerns. But they shouldn’t prevent risk and fraud practitioners from embracing the benefits and opportunities that AI can offer.  

How to use AI responsibly

These professionals can do a great deal to manage the risks of AI by working to ensure that their organizations are using AI tools ethically and responsibly. This is of particular concern in risk and fraud practices because AI itself can be a powerful weapon in fraudsters’ arsenals. Adding to this challenge is the fact that regulations regarding AI use are still emerging. This is still an evolving technology, after all. And because GenAI is built on large language models that “learn” and improve, the technology is evolving at an almost bewildering pace.

There are two fundamental ways for professionals to manage the ethical and regulatory demands of using AI:

  • Maintain human oversight and intervention. Organizations should establish protocols to ensure that the relevant professionals review the tool’s output; a minimal sketch of such a review gate follows this list.

  • Stay on top of technological and regulatory changes. As AI evolves and garners more and more industry and governmental scrutiny, risk and fraud professionals should track both the technology and the rules that govern it.
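
As a thought experiment, here is a minimal, hypothetical sketch of what a human-review gate could look like in code. The generate_report function is a stand-in for a real GenAI call, and all names are invented; the point is simply that nothing leaves the workflow without a named reviewer’s sign-off.

```python
# Hypothetical sketch of a human-in-the-loop review gate.
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    reviewed_by: str | None = None  # None until a human signs off

def generate_report(prompt: str) -> Draft:
    # Placeholder for a real GenAI call; returns an unreviewed draft.
    return Draft(content=f"AI-generated analysis for: {prompt}")

def approve(draft: Draft, reviewer: str) -> Draft:
    # A real protocol would also log this decision for audit purposes.
    draft.reviewed_by = reviewer
    return draft

def publish(draft: Draft) -> None:
    # The gate itself: unreviewed AI output cannot be released.
    if draft.reviewed_by is None:
        raise PermissionError("AI output requires human review before release")
    print(f"Published (reviewed by {draft.reviewed_by}): {draft.content}")

draft = generate_report("Q3 vendor payment anomalies")
publish(approve(draft, reviewer="fraud.analyst@example.com"))
```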

 

Research new technology

To help mitigate these risks and allay these worries, risk and fraud professionals need the right kind of AI tools. Such a tool should deliver significant, measurable benefits. The risk and fraud professionals surveyed in the Thomson Reuters GenAI report identified the following use cases that AI could automate and speed up:

  • Risk assessment and reporting 
  • Knowledge management 
  • Document drafting, review, and summarization 
  • Finance (this was cited as a use case by corporate risk professionals) 
  • Contract drafting and legal research (these use cases are of particular interest to government risk professionals)  

One of the fundamental ways that professionals can manage AI risk is by having access to timely, accurate data regarding changes in regulations and industry best practices. AI’s automation capabilities can help deliver that information quickly. A well-designed AI tool also can limit human error in documents and research. And again, professional-grade AI technology should help risk and fraud professionals manage risk as well as assess it. That capability should include enhanced fraud detection and prevention. 

What to look for in an AI assistant

Given these requirements, risk and fraud professionals will want to conduct rigorous due diligence on potential AI tools. They should determine whether the technology under scrutiny can speed up the tasks listed above and whether it can do so reliably—while still helping them manage AI fraud risk. One type of AI deserving a particularly close look is a GenAI assistant, a digital tool that automates necessary but time-consuming work so that professionals can focus on more profitable uses of their time.  

Here are a few essential questions risk and fraud professionals should ask as they conduct their due diligence:  

  • Will the tool access and provide reliable information? A GenAI assistant should access data from credible industry sources—and not glean potentially false or incomplete “answers” from unreliable content. (A simple sketch of this kind of source vetting follows this list.)
  • Does the technology developer understand the challenges that risk and fraud professionals face? The vendor should be able to prove it has a deep knowledge of fraud risk and ethical concerns within these professionals’ fields.
  • How experienced is the vendor’s AI talent? Does the company offering the tool have an extensive track record in AI development? Can it demonstrate an ability to update and innovate as new challenges and needs arise?  
  • How does the tool help protect data? Risk and fraud professionals don’t want to put their organizations and clients at risk for AI-powered fraud, after all.  
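
To illustrate the source-vetting question, here is a minimal sketch of an allow-list filter that admits only documents from trusted domains before they ever reach a GenAI assistant. The domains and URLs are hypothetical examples, not a recommended list.

```python
# Hypothetical sketch: vet candidate sources against an allow-list
# of credible domains before passing them to a GenAI assistant.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {
    "federalregister.gov",  # example regulatory source
    "sec.gov",              # example regulatory source
}

def is_credible(url: str) -> bool:
    host = urlparse(url).netloc.lower()
    # Accept the trusted domain itself or any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

candidate_sources = [
    "https://www.sec.gov/newsroom",                    # passes the filter
    "https://random-blog.example/ai-fraud-hot-takes",  # filtered out
]

vetted = [url for url in candidate_sources if is_credible(url)]
print(vetted)
```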

 

Continue to learn

Again, AI is ever-changing. That makes it crucial for risk and fraud professionals and the organizations that employ them to stay current on those changes. This is an essential part of successfully incorporating AI into professionals’ workflows. Here are some practical ways to prepare for AI’s impact and optimize the technology’s benefits:  

Join AI groups

These groups allow risk and fraud professionals to share insights, stumbles, and success stories with colleagues. It’s particularly beneficial for these professionals to seek out groups in their industry or field. Many such groups sponsor conferences and webinars that can provide additional networking and learning opportunities.   

Attend tutorials and courses, virtually or in person

Professionals will want to focus on education and training platforms that offer lessons most applicable to their professional practice. AI training and education, like the technology itself, will evolve over time, with new risks (and benefits) arising and new lessons to learn. Among the skills that risk and fraud professionals are likely to need to learn more about are data analysis (since AI’s power is rooted primarily in data) and data security (since proprietary data needs to be protected).  

Keep up with AI reports, blogs, and other information sources

These written materials include reports and articles that provide actionable insights into current use cases of AI technology and future impacts on risk and fraud professionals’ practices. Discussions about new products represent another source of useful information.  

By continuing to learn, risk and fraud professionals ensure that their organizations can confidently face the future of AI and profit from its use. Those efforts include maintaining compliance with regulations and industry standards regarding ethical AI usage. Just as crucially, these professionals can maximize AI’s capabilities to manage AI fraud risk—even when the fraud’s perpetrator isn’t an AI-generated Hollywood star.  

 

 


