Commentary: AI offers banks new ways to repeat old mistakes
Generative artificial intelligence tools risk adding errors and hallucinations to issues of bias and bad advice, says Bloomberg Opinion's Paul J Davies.

The Morgan Stanley headquarters building is seen on Jan 17, 2023 in New York City. (Photo: AFP/Michael M Santiago/Getty Images)
LONDON: The extraordinary hype around artificial intelligence this year touched the finance industry, too, but most banks have been rightly cautious about jumping directly onto the bandwagon. In such a tightly regulated business, the costs of getting it wrong could be extreme.
Several big banks started the year by banning employees from experimenting with ChatGPT in their day jobs. They were concerned about the factual errors, “hallucinations” and risks of plagiarism inherent to the public version of the text-generation tool. Behind the scenes, however, the same lenders have been scrambling for tech talent, while exploring how they might use generative AI and what it could do better than the predictive AI, or machine learning, tools used for years in fraud detection and anti-money laundering work.
In September, Morgan Stanley became one of the first major banks to roll out a true generative AI tool in its business - an internal chatbot for its financial advisers based on OpenAI’s technology, which is meant to help all of its advisers appear as smart as its best ones.
It was a daring move. The big risk with generative AI is that these tools are designed to prioritise fluency over accuracy, as a recent report from UK Finance, an industry trade body, and Oliver Wyman, the consultants, put it.
They are designed to sound convincing, as if there were solid reasoning as well as knowledge behind their words. In fact, there is no individual agency behind them, and their recall of data is less than perfect.
AI DOESN'T SOLVE BANKS' PRE-EXISTING PROBLEMS
Generative AI also hasn’t solved pre-existing AI issues that can pose big risks to banks: the potential for bias in the data or outputs, and the lack of transparency in how conclusions are reached.
When you sell someone an investment product or tax-efficient structure, you need to be sure it suits the client. If you turn someone down for a home loan, you must be able to show it isn’t because of their skin colour. Banks’ humans have managed to get this stuff wrong for years without any help from robots.
“Banks need to deploy safe, fair and effective AI to mitigate risk and build trust,” Mike Mayo, analyst at Wells Fargo, wrote recently. “No bank wants to be the poster child for getting AI wrong.”
Morgan Stanley protected itself by limiting the source data its chatbot uses to its own intellectual capital. This has been thoroughly checked to ensure it is accurate and organised for the AI in ways meant to ensure it consistently fetches the right answers, according to Jeff McMillan, head of analytics, data and innovation.
The bank also limited its potential responses. “If you ask it about how to open a trust account, you will get that information,” McMillan said via email. “But if you ask it about a football match, it will simply say it does not know. And we believe ‘I do not know’ is a valid and fair response.”
This is a good limitation. Just like some people, generative AI can be prone to framing uncertain or incorrect answers with a high degree of confidence.
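The pattern Morgan Stanley describes - answer only from a vetted corpus, and refuse anything outside it - can be sketched in a few lines. Everything below (the corpus entries, the word-overlap scoring, the threshold) is an illustrative assumption, not the bank's actual implementation, which would use far more sophisticated retrieval.

```python
# Minimal sketch of the "vetted corpus or refuse" pattern described above.
# The corpus, relevance score and threshold are all hypothetical.

VETTED_CORPUS = {
    "how to open a trust account": "Complete form T-1 and verify the trustee's identity.",
    "margin account requirements": "A minimum equity balance applies before trading on margin.",
}

def _overlap(query: str, key: str) -> float:
    """Crude relevance score: fraction of the key's words present in the query."""
    q_words = set(query.lower().split())
    k_words = set(key.lower().split())
    return len(q_words & k_words) / len(k_words)

def answer(query: str, threshold: float = 0.6) -> str:
    """Return the best vetted answer, or an explicit refusal if nothing matches well."""
    best_key = max(VETTED_CORPUS, key=lambda k: _overlap(query, k))
    if _overlap(query, best_key) < threshold:
        return "I do not know."
    return VETTED_CORPUS[best_key]
```

The design choice is the refusal branch: a question about a football match scores near zero against every corpus entry, so the bot says "I do not know" rather than generating a fluent guess.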
WHOLE NEW WORLD OF PERILS AHEAD
It is early days, but while US banks seem more hopeful of the revenue-boosting potential of generative AI, that’s less true in the UK. Only 13 per cent of financial groups think AI will boost sales, according to a survey by UK Finance. Most said the main benefits will be in operational efficiencies.
Still, only 9 per cent of the 23 big finance companies that answered UK Finance’s survey had deployed simpler predictive AI tools for multiple tasks, while 12 per cent had deployed them for fewer than a handful of tasks. No respondents had used generative AI widely, and only 5 per cent had used it narrowly.
However, 17 per cent were running pilot programmes or proof-of-concept experiments with the newer technology in the hopes of using it in future.
The benefits that banks expect don’t necessarily include big staff reductions. Many firms see these tools as enhancing staff capabilities - like with the Morgan Stanley chatbot - and becoming a kind of co-pilot for idea generation, customer services or managing and presenting data.
The fallibility of generative AI means there could still be lots of work in checking its outputs. Meanwhile, different skills might be more useful than those held by many bank employees today. For example, Morgan Stanley found that the best people at writing prompts to improve its chatbot were youngsters with liberal arts degrees who are good at communicating.
To illustrate the AI hype, consultants at McKinsey this summer claimed that generative AI could add between US$2.6 trillion and US$4.4 trillion in value to companies annually - potentially more than the current gross domestic product of California. McKinsey reckons banks could capture as much as US$340 billion of that value.
Such forecasts are silly and will likely never be proven right or wrong. But banks will try to use AI in any way they reasonably can - finance is the original information-technology business, after all.
They should move slowly and carefully, however. There’s a whole new world of perils ahead.