Chat OMG?? Navigating the Future of Generative AI in Financial Services

I am paying close attention to how vendors claim to integrate AI into financial services. A recent article from The Financial Brand, "ChatGPT Will Become ChatOMG! In 2024," definitely caught my eye. The story highlights a Forrester study predicting that multiple financial institutions (FIs), both traditional banks and neobanks, will suffer negative consequences in their efforts to deploy Generative AI. Specifically, the report called out "rogue employees" and third-party vendors as the likely culprits, deploying AI without proper oversight or vetting. This comes on the heels of the Biden Administration's announcement of an executive order regarding the use of AI by U.S. businesses. If implemented as written, the order would direct banking regulators to create new rules, bringing increased examiner attention to the use of AI by financial institutions and related companies.

The report notes that problems arising from the use of AI could stem from inadvertent carelessness or from intentional violation of the rules governing AI's use. Examples cited include copyright infringement, release of personally identifiable information (PII), and failure to offset AI bias. I would add to that list the likelihood of AI generating false or inaccurate information that goes uncorrected. Any of these could trigger customer complaints or even lawsuits. It is for this reason that most large FIs have severely curtailed the ways ChatGPT can be used, even in pilot or beta programs. More specifically, the report notes that large financial institutions will likely have gaps in their AI oversight simply because of the size of the organization. Contrast that with neobanks, which have both fewer staff to monitor AI usage and fewer restrictions on how it might be used. Because Large Language Models (LLMs) like ChatGPT ravenously consume data, the biggest concern is how to prevent them from accessing customer data. Even if the goal is a paternalistic one of providing a better customer experience, once the tool has access to customer data, it may not be possible to prevent it from being used in unintended ways that violate privacy.

While it is still early, the examining bodies are already weighing in on the use of Generative AI in banking services. The OCC has stated that it wants its FIs to ask for permission, not forgiveness, regarding AI's use. There is particular concern about how AI might be used in making banking decisions such as opening an account or approving a loan. What cannot happen is for an AI decision model to be a "black box" whose internal processes are obscured from review. The Biden executive order requires companies to demonstrate that their use of AI is "safe" before such systems can be deployed.

I encourage you to read the full article, a link to which is included at the end of this post. Financial industry professionals must take the issues highlighted by the Forrester study seriously. This is not the time for FIs to implement AI frivolously or to license services from third parties that cannot demonstrate that their use of AI will pass regulatory muster. But neither should FIs look at the potential minefield of unknown future regulation and ignore Generative AI altogether. This is a game-changing technology, and like many such breakthroughs over the decades, it will take smart, reasoned strategic planning to ensure that its use in financial services does no harm, improves the customer experience, and meets all regulatory requirements. This we can do as long as we remain vigilant and keep our eyes wide open.

The views expressed in this blog are for informational purposes. All information shared should be independently evaluated as to its applicability or efficacy. FNBB does not endorse, recommend, or promote any specific service or company that may be named or implied in any blog post.