

Using AI to mitigate risks and pinpoint dangers in adverse media screenings

· 5 minute read

Adverse media screenings — the process of searching for negative news as part of a comprehensive program to determine risk levels — continues to grow in popularity.

Those using this screening method are particularly interested in information related to cybersecurity, changing regulations, workplace risk, and business operations. They use such programs to proactively review media for adverse information and to safeguard the organization’s reputation during client or vendor onboarding and other periodic reviews. In fact, the use of adverse media screening has been given a shot in the arm by regulatory agencies, which increasingly expect companies to perform robust customer due diligence checks as part of ongoing risk management programs.

However, simple search methods don’t always surface vital information, and sorting through the massive volume of results they generate is even more challenging. Google still ranks as the top service for adverse media screenings, even though public search engines suffer numerous limitations. Most often, they return too many false positives: matches that, once scrutinized by analysts, turn out to be irrelevant. These false positives can significantly compromise the effectiveness of risk management and regulatory compliance programs.

Moving toward AI-driven searches

Fortunately, there are new technology solutions — such as Thomson Reuters CLEAR Adverse Media — that are driven by artificial intelligence (AI) and can greatly improve the effectiveness and efficiency of adverse media screenings. “With CLEAR Adverse Media, I can easily identify sanctions and PEP [politically exposed persons] status,” said one fraud analyst at a midsize professional services company. “It has helped sift through the noise of Google results much more effectively.”

By using search and filter options based on natural-language processing (NLP), which gives computers the ability to analyze and understand human language, these AI-driven search tools can more accurately identify the media mentions most pertinent to a given query.

In fact, when investigators were asked what challenges led them to implement an AI-driven adverse media search program like CLEAR Adverse Media, 61% said that search engines like Google returned too much irrelevant information, and 79% of those surveyed said that CLEAR Adverse Media saves them time in their adverse media or sanctions screening processes.

This is possible because natural-language processing, a form of AI, uses algorithms to help computers understand how humans speak and write, allowing software systems to interpret human commands, make sense of content, and cut through a vast number of possible results to determine what’s important. NLP also helps risk management teams prioritize and categorize data, tag relevant content, and identify duplicate stories, eliminating the need to manually evaluate volumes of content, saving valuable time, and reducing the possibility of human bias or error.
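
To make the duplicate-detection idea concrete, here is a minimal, purely illustrative sketch using one common NLP technique: comparing TF-IDF vectors of article text. This is not a description of how CLEAR Adverse Media works internally; the articles and the similarity threshold are hypothetical.

```python
# Illustrative only: a toy duplicate-story check using TF-IDF cosine
# similarity. Articles and threshold are hypothetical, not CLEAR's logic.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Acme Corp fined $2M by regulators over compliance failures.",
    "Regulators fine Acme Corp $2 million for compliance failures.",
    "Acme Corp opens new office in Denver.",
]

# Turn each article into a weighted word-frequency vector
vectors = TfidfVectorizer(stop_words="english").fit_transform(articles)
similarity = cosine_similarity(vectors)

# Flag pairs above a (hypothetical) threshold as likely duplicate stories
THRESHOLD = 0.6
for i in range(len(articles)):
    for j in range(i + 1, len(articles)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Articles {i} and {j} look like duplicates "
                  f"(similarity {similarity[i, j]:.2f})")
```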

Beyond NLP, innovative developments like machine learning (ML) focus on making computer programs that can access data and use it to learn for themselves. Over time, ML systems improve from experience without requiring specific programming or tech-expert users. Not surprisingly, 77% of survey respondents said they chose CLEAR Adverse Media because of its ease of use.
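
As a purely illustrative sketch of that learning-from-data idea, the snippet below trains a toy text classifier to separate adverse from benign headlines. The training examples, labels, and test headline are hypothetical, and a production screening system would rely on far larger datasets and more sophisticated models.

```python
# Minimal sketch of ML-based screening: the model learns from labeled
# examples instead of hand-written rules. Data here is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "Executive indicted on fraud and money laundering charges.",
    "Company settles bribery investigation with regulators.",
    "Firm announces quarterly earnings above expectations.",
    "Company sponsors local charity marathon event.",
]
labels = [1, 1, 0, 0]  # 1 = adverse, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(snippets, labels)

# Score a new, unseen headline: probability that it is adverse
headline = "Regulators open fraud investigation into firm"
print(model.predict_proba([headline])[0][1])
```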

Others were pleasantly surprised by the critical information their AI-driven searches uncovered, information that had previously eluded them. “With CLEAR Adverse Media, we have been able to find additional information on some individuals that we weren’t able to through other means,” said a general counsel at a small professional services company. “It also has helped us gain pre-bankruptcy information so that we aren’t in the dark about a client’s financial situation.”

These tools and their filters also allow users to remove multiple hits for the same content, discern truly adverse events in search results, and categorize content so analysts can focus on what they truly care about. They can even search beyond traditional news sources, covering press releases, non-governmental organizations’ reports, third-party sources of structured data, and notices published by law enforcement agencies, tax authorities, and government agencies. Further, ML solutions save time by discounting unimportant news, identifying false positives, and flagging relevant media that needs further evaluation. ML technology can determine whether a news article focuses on the subject of interest or merely mentions the subject in passing, and it speeds up the review process by highlighting keywords and essential terms.
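
One way to picture the focus-versus-passing-mention distinction is a simple heuristic on where and how often the subject’s name appears in an article, sketched below with hypothetical text. Real ML systems use far richer signals than this, but the underlying intuition is similar.

```python
# Hypothetical heuristic, for illustration only: early, repeated mentions
# of the subject suggest the article is about them; a single late mention
# suggests the subject appears only in passing.
def mention_profile(article: str, subject: str) -> str:
    sentences = [s.strip() for s in article.split(".") if s.strip()]
    hits = [i for i, s in enumerate(sentences) if subject.lower() in s.lower()]
    if not hits:
        return "no mention"
    if hits[0] == 0 and len(hits) >= 2:
        return "likely focus of article"
    return "likely passing mention"

story = ("Acme Corp was fined by regulators today. The penalty follows "
         "a two-year investigation. Acme Corp has declined to comment.")
print(mention_profile(story, "Acme Corp"))  # likely focus of article
```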

Of course, even the most advanced AI technology requires some manual work. Corporate risk and compliance teams still need to set up and update parameters in the system, configure the tools, customize the filters, monitor the information, and review the results. Even so, the quality and precision of the results can surprise users. “CLEAR Adverse Media saves us a lot of time doing due diligence,” said a compliance officer at a midsize financial services company.

As these examples show, AI-driven adverse media screening can greatly help companies maintain regulatory compliance and identify potential financial and strategic risks, giving corporate risk mitigation and fraud detection teams the tools to easily access relevant information and conduct comprehensive, diligent investigations.

You can learn more in this recent Adverse Media Screening white paper.

Thomson Reuters is not a consumer reporting agency and none of its services or the data contained therein constitute a ‘consumer report’ as such term is defined in the Federal Fair Credit Reporting Act (FCRA), 15 U.S.C. sec. 1681 et seq. The data provided to you may not be used as a factor in consumer debt collection decisioning, establishing a consumer’s eligibility for credit, insurance, employment, government benefits, or housing, or for any other purpose authorized under the FCRA. By accessing one of our services, you agree not to use the service or data for any purpose authorized under the FCRA or in relation to taking an adverse action relating to a consumer application.
