Outsmart AI risks with these 3 essential strategies

· 5 minute read

Stay ahead of emerging threats with actionable steps for your company

More and more businesses and agencies are incorporating AI tools into their operations as they realize AI’s productivity benefits. As AI becomes more widespread and its capabilities grow more sophisticated, corporate risk and fraud professionals face new and constantly evolving risks. 

Let’s examine three potential risks that especially concern these professionals, then explore strategies to mitigate them while maximizing AI’s significant benefits.

Jump to ↓

AI Risk #1: Ethical concerns

AI Risk #2: Data privacy

AI Risk #3: Security vulnerabilities

Understanding the most effective strategy

AI Risk #1: Ethical concerns

AI generates outputs using algorithms that people develop and information that people provide. As a result, AI may be subject to bias. In other words, the data used to train an AI tool may lead it to generate outputs that favor one outcome over another. AI models also may be vulnerable to hallucinations – outputs that are factually incorrect or nonsensical.

Users need to be able to trust the information an AI tool delivers. That information needs to be reliable and free of bias. When AI systems lack transparency, users struggle to understand how the system produces its outputs or to address errors and bias.

These biases can cause serious problems. Some years back, many companies used AI-based recruiting software that tended to reject non-white applicants. More recently, a large health insurer has been accused of using AI models that deny needed care or prematurely terminate coverage. While AI outputs can support decision-making in many contexts, businesses must continue to rely on human judgment to keep outcomes as unbiased as possible.

Mitigation strategies

These concerns point to the importance of maintaining human oversight of AI tools. Organizations must be able to verify the specific sources of information the tool has retrieved. In addition to implementing ethical guidelines and frameworks for AI use, businesses and other users should conduct regular audits and assessments to ensure that staff are using these tools correctly and fairly. One useful strategy for mitigating potential ethical risk is to build diverse AI teams that can identify and minimize bias.
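To make the idea of a regular bias audit concrete, here is a minimal sketch in Python of one check such an audit might run: comparing selection rates across demographic groups, sometimes called a demographic-parity check. The records, field names, and 80% threshold are illustrative assumptions, not part of any specific product or standard.

```python
# Minimal sketch of a periodic bias audit: compare selection rates by group.
# The records, field names, and the 80% ("four-fifths") threshold are
# illustrative assumptions, not a reference to any particular system.
from collections import defaultdict

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for rec in decisions:
    totals[rec["group"]] += 1
    approvals[rec["group"]] += rec["approved"]

rates = {g: approvals[g] / totals[g] for g in totals}
print("Selection rates by group:", rates)

# Flag a disparity if any group's rate falls below 80% of the highest rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Audit flag: group {group} selected at {rate:.0%} vs best {best:.0%}")
```

In practice, a team would run a check like this on logged decisions at a regular cadence and route any flagged disparity to human reviewers.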


AI Risk #2: Data privacy

Businesses know their IT systems store sensitive data about their operations and customers. Many professionals worry that AI systems might compromise that data or expose it publicly. AI systems require access to large amounts of data, including sensitive personal information, which raises concerns about privacy violations and the need for strong safeguards. Data breaches also occur when cybercriminals use deepfakes and other AI-powered phishing techniques, often bypassing traditional safeguards.

Mitigation strategies

Organizations should establish clear and robust data protection policies and safeguards, if they haven’t already. These should include investing in current data security and encryption technologies capable of real-time threat detection. And given the importance of human oversight, organizations should train employees on data privacy and compliance protocols, with regular updates as those protocols change. Threats from malicious AI use constantly evolve, so ongoing, real-time vigilance is essential.
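As one illustration of such a safeguard, the sketch below redacts obvious personal identifiers from text before it is passed to an external AI service. The regex patterns and the overall approach are simplified assumptions for demonstration; production systems should rely on dedicated PII-detection and encryption tooling.

```python
# Minimal sketch: strip common PII patterns before sending text to an
# external AI service. The patterns are illustrative only; real deployments
# should use dedicated PII-detection and encryption tooling.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable identifiers with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane.doe@example.com (SSN 123-45-6789) reported fraud."
safe_prompt = redact(prompt)
print(safe_prompt)  # Customer [EMAIL] (SSN [SSN]) reported fraud.
# Only safe_prompt, never the raw text, would be sent to the AI service.
```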


AI Risk #3: Security vulnerabilities

Organizations must address new and evolving cybersecurity risks. AI can make fraudsters’ and cybercriminals’ deceptive activities even more effective. Adversarial attacks present another set of risks. For example, threat actors can manipulate an AI platform’s input features, meaning the data points and elements it processes and learns from. This can include data poisoning, which involves injecting manipulated or mislabeled data into training datasets and corrupting the AI’s learning process. Security vulnerabilities can expose a company to financial penalties, legal repercussions, and reputational damage.
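To see why data poisoning is so damaging, consider this minimal sketch, which assumes the NumPy and scikit-learn libraries are available. It flips a fraction of training labels, the simplest form of poisoning, and compares a classifier’s accuracy when trained on clean versus poisoned data. It is a toy illustration of the mechanism, not a reproduction of any real attack.

```python
# Toy illustration of label-flipping data poisoning (assumes scikit-learn).
# A fraction of training labels is flipped, corrupting what the model learns.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy(train_labels):
    """Train on the given labels and score on the held-out clean test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return model.score(X_test, y_test)

# Poison 30% of the training labels by flipping them.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print(f"Clean training accuracy:    {accuracy(y_train):.2f}")
print(f"Poisoned training accuracy: {accuracy(poisoned):.2f}")
```

Even this crude attack degrades the model, which is why validating the provenance and integrity of training data is a core AI security control.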

Mitigation strategies

Organizations can mitigate AI-related security risks by applying the same best practices they use for general cybersecurity, including comprehensive security measures designed specifically for AI systems. One best practice is to collaborate with cybersecurity experts experienced in safeguarding AI applications. IT teams should also regularly update and patch AI software to address vulnerabilities.

Understanding the most effective strategy

For these mitigation strategies to be effective, organizations need AI platforms that draw from reliable sources. They also need tools that can uncover evidence of potential fraud and cybercrime, including instances of fraudulent AI use. One such tool is the Risk Analysis Summary feature in the Thomson Reuters Risk & Fraud Solutions platform, which gives an organization’s risk and fraud team members concise insights and AI-powered investigative capabilities. Take the next step in responsible AI adoption by learning more about the Risk Analysis Summary from Thomson Reuters.


Risk Analysis Summary

Quickly take your investigation from insights to action

Explore the possibilities ↗

Disclaimer

Thomson Reuters is not a consumer reporting agency, and none of its services or the data contained therein constitute a ‘consumer report’ as such term is defined in the Federal Fair Credit Reporting Act (FCRA), 15 U.S.C. sec. 1681 et seq. The data provided to you may not be used as a factor in consumer debt collection decisioning, establishing a consumer’s eligibility for credit, insurance, employment, government benefits, or housing, or for any other purpose authorized under the FCRA. By accessing one of our services, you agree not to use the service or data for any purpose authorized under the FCRA or in relation to taking an adverse action relating to a consumer application. 
