
Artificial Intelligence

Generative AI in banking: Opportunities and pitfalls abound with this new technology

· 10 minute read

Jump to:

The benefits of generative AI in banking

 The pitfalls of generative AI in banking

 Current state of AI adoption in financial institutions



As generative AI makes great inroads into many industries, financial services firms and banking institutions may face some of the biggest challenges with AI adoption, especially around how to assess and minimize the risks.

Synthetic businesses — false companies often created with a combination of real and faked documentation — are one of the major concerns of many companies when they start doing business with a new customer or vendor, according to a new report by the Association of Certified Fraud Examiners (ACFE) and Thomson Reuters.

The survey report found that 61% of businesses said it’s moderately or extremely challenging to onboard a new vendor, while 52% said the same about onboarding a new customer. Why this worry over being able to tell a fake business from a real one? In part because of the difficulty that many organizations — including financial services firms and banks — are now having in conducting proper due diligence to ensure that an illicit actor hasn’t used generative artificial intelligence (AI) to create a fake business that appears very real on the surface, complete with official-looking documents and even a website.

“What we’re seeing with synthetic identity at fictitious businesses is that generative AI and machine learning can replace what was once done with a very manual process within criminal organizations,” said one of the authors of the report.

Welcome to the new environment of banking and business in a generative AI-driven world.

Generative AI, a subset of artificial intelligence that involves generating new content or data, has tremendous potential to offer various benefits and opportunities within the financial services sector. However, along with that comes potential pitfalls and challenges — such as much more sophisticated and harder-to-detect fraud — that financial institutions need to understand.

The benefits of generative AI in banking

Clearly, the benefits of generative AI to financial services institutions and banks are numerous, including in areas of fraud detection, better customer service, enhanced risk assessment, and more.

In fraud detection and prevention, for example, AI offers a vast advantage as it can analyze large amounts of transactional data to detect any unusual patterns and identify possible fraud, greatly increasing a monitoring team’s efficiency and effectiveness, especially compared to the traditional ways of working. In fact, this technology will also assist banks’ compliance teams with their anti-money laundering efforts, as well as reporting fraudulent scams and other suspicious activity more completely and more quickly.
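As a simplified illustration of the pattern-analysis idea described above — not any bank's actual system — an anomaly check might flag transactions that deviate sharply from an account's history. The threshold, function names, and data here are all hypothetical:

```python
# Hypothetical illustration: flag transactions whose amount deviates
# sharply from an account's historical mean (a simple z-score rule).
from statistics import mean, stdev

def flag_unusual(history, new_amounts, threshold=3.0):
    """Return the amounts in new_amounts that lie more than
    `threshold` standard deviations from the historical mean."""
    mu = mean(history)
    sigma = stdev(history)
    return [a for a in new_amounts if abs(a - mu) > threshold * sigma]

# Example: a steady account suddenly shows a very large transfer.
history = [120, 95, 110, 130, 105, 98, 115, 102]
flagged = flag_unusual(history, [108, 5000])
print(flagged)  # only the 5000 transfer is flagged for review
```

Real monitoring systems use far richer features (merchant, geography, velocity) and learned models rather than a single statistical rule, but the principle — score each transaction against an established pattern and route outliers to a human reviewer — is the same.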

Generative AI can be used to create more personalized customer interactions, such as with the use of chatbots that engage with customers in a human-like manner, providing assistance and information that can be tailored to individual needs. It can also enhance the overall customer experience, especially around on-boarding new customers or vendors, which can become much easier and less time- and personnel-intensive.

Indeed, generative AI can also greatly improve an institution’s employee training protocol, making the process much more valuable than what happens now in many organizations. Gen AI allows users to tailor a training program to the specific needs of the organization, creating a template of sorts that ensures future training will be more comprehensive and consistent throughout the organization.

And around risk assessment, AI models can determine credit risk more accurately and much faster by analyzing the financial history and other relevant data of an individual or a business, leading to better lending decisions. Finally, generative AI also will allow compliance teams and financial officers to identify more completely new or changing regulatory requirements and technological advancements that impact the industry or a specific client segment, sifting through to what is most important to financial institutions’ clients.
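To make the credit-risk idea concrete, here is a minimal sketch of the kind of scoring function such a model might reduce to: a logistic curve over a few financial-history features. The features and weights are invented for illustration and are not calibrated to any real data:

```python
# Hypothetical sketch: a logistic scoring function over a few
# invented features (debt-to-income, missed payments, utilization).
import math

def credit_risk_score(debt_to_income, missed_payments, utilization):
    """Return an estimated default probability between 0 and 1.
    Weights are illustrative only, not fit to real lending data."""
    z = -3.0 + 2.5 * debt_to_income + 0.8 * missed_payments + 1.5 * utilization
    return 1.0 / (1.0 + math.exp(-z))

low = credit_risk_score(0.2, 0, 0.1)   # low-risk profile
high = credit_risk_score(0.6, 4, 0.9)  # high-risk profile
print(round(low, 3), round(high, 3))
```

In practice the weights would be learned from historical repayment data, and — as the next section discusses — both the training data and the resulting decisions would need to be audited for bias.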

The pitfalls of generative AI in banking

Of course, generative AI does not bring only roses; there are also thorns lurking not far beneath. Some of the concerns that widespread use of gen AI carries include worries over bias, data security, increased sophistication among fraudsters, and how the technology may come to be regulated by the government. Every pitfall has a mitigation solution, however, and financial firms and banks need to carefully ensure that each potential risk is being addressed.

Among the biggest concerns voiced by financial services institutions and banks, of course, is that these advanced and innovative tools make it much easier for illicit actors to engage in fraud and deception that is harder to detect, swindling money from victims such as bank customers. Indeed, as generative AI — and especially public-facing tools such as ChatGPT — gains in popularity, part of its fanbase consists of those who seek to commit fraud.

In fact, as more sophisticated versions of gen AI and ChatGPT evolve, bank officers and compliance professionals can expect to see instances of more persuasive phishing emails or more convincing impersonators scamming for information by telephone, as well as fake browser extensions, apps, or malware created with gen AI. Not surprisingly, this all could lead to an increase in data breaches or blackmail schemes in which criminal organizations threaten to release personally identifiable information of bank customers.

This carries with it a worry over data privacy as well: generating customer-related content with AI raises the risk that sensitive information could inadvertently be exposed or mishandled. The solution for banks and financial firms is to implement robust cybersecurity measures that protect AI systems from hacking attempts, data breaches, and unauthorized access.

Other areas of concern include gen AI’s propensity toward bias and unfairness. AI algorithms have shown that they can inherit biases from the data upon which they’re trained, leading to discriminatory outcomes in lending, hiring, and other areas if not properly monitored and addressed. Indeed, the complexity of AI models, particularly generative ones, can make it challenging to understand their decision-making processes, leading to difficulty in explaining the technology’s actions to customers, the market, or regulators. To combat this problem, financial institutions need to actively identify and address biases in the organization’s training data and its algorithms to ensure fair and equitable outcomes.
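One concrete way teams sometimes check lending outcomes for bias is the "four-fifths" disparate-impact rule of thumb: compare approval rates across groups and flag ratios below 0.8. The sketch below is a generic illustration with toy data, not a compliance tool:

```python
# Generic sketch of a disparate-impact check: compare approval rates
# across two groups using the common "four-fifths" rule of thumb.
def approval_rate(decisions):
    """Fraction of approved (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.
    Values below 0.8 are often treated as a red flag."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# 1 = approved, 0 = denied (toy data)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% approved
ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2), "flag" if ratio < 0.8 else "ok")
```

A single ratio is only a starting point — real fairness audits examine multiple metrics, confounding variables, and the training data itself — but it shows how a monitoring check can be made routine rather than ad hoc.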

Finally, the financial services industry is highly regulated, and gen AI likely will only add to that scrutiny. Already, full regulation of AI by the government is under consideration around the world and here in the United States, at both the federal and individual state level. Financial services institutions and banks that implement AI systems will most likely be required to ensure compliance with AI-focused extensions of regulations covering data privacy, consumer protection, and much more.

Fortunately, gen AI itself gives institutions an effective way to stay up to date with evolving regulatory frameworks. And firm leaders should make sure there is collaboration among organizations’ legal, IT, and compliance teams to ensure the institution is complying with relevant laws and regulations.

Current state of AI adoption in financial institutions

Since ChatGPT's public launch in November 2022, generative AI tools and processes have been making their way into financial services institutions at a quickening pace. Yet, while some institutions have successfully implemented AI solutions, others are still in the experimental or early adoption phases, often taking a more cautious approach based on their individual organization’s risk assessment.

Some of the key questions that financial services firms and banks need to address as they consider this adoption include those of data quality, technical expertise, and ethical considerations.

For example, AI models, including those that use gen AI to develop content or work product, require high-quality and diverse data to perform most effectively. Today, too many financial institutions struggle with siloed data and incomplete information, which would make AI-driven outcomes spotty at best and erroneous at worst. Any financial firm considering adopting AI-driven tools or processes must first determine whether its data is of sufficient quality and accessible to the program. A full clean-up of all the institution’s data may be required before it would be wise to proceed with AI adoption.

Along with clean and robust data, financial institutions need to ensure that they employ the necessary technical expertise to operate a generative AI-enabled program properly. That means having professionals on board who can develop and maintain such AI systems. Be aware, however, that such professionals are in high demand right now, and that level of specialized technical skill may be in extremely short supply in the banking industry.

Another area to address before any AI adoption is the ethical questions that use of this technology carries with it. Indeed, decisions made by AI systems, or the use of gen AI itself, can have significant impacts on the lives of individual employees, customers, and vendors. This alone raises ethical concerns about accountability and transparency that firm management will have to consider.


As we’ve discussed, the use of generative AI in banking offers significant potential benefits but also comes with challenges and risks that must be carefully managed. Financial institutions should approach AI adoption with a comprehensive strategy that includes addressing biases, ensuring data privacy, complying with regulations, and maintaining transparency and accountability.

Indeed, it’s critical for financial institutions and banks to remember that ongoing monitoring and maintenance of their gen AI systems for performance degradation, concept drift, and unexpected behaviors is the best way to maintain their effectiveness. Also, collaboration is critical in the adoption phase. By fostering collaboration among technical teams, compliance officers, legal experts, and other business units, organizations can best ensure a holistic approach to gen AI use within the firm, especially as this new technology continues to evolve.

