In the closing hours of last week’s FinovateFall 2023 conference, AI entered the conversation across presentations and a panel discussion, rounding out the event that examined numerous facets of the future of finance. Informa, InformationWeek’s parent, hosted the conference in Manhattan.
A rather familiar face from the New York innovation scene, Brett Martin gave a quick history lesson on AI’s development that — despite how it may seem — began long before ChatGPT’s skyrocketing popularity. Martin is co-founder of venture capital firm Charge Ventures and has also been in the trenches as a startup founder. The fintech sector, he said, was an early adopter of AI with the US government putting the technology to work in the 1990s in fraud detection.
“In 1993, they caught $1 billion worth of fraud in the first year they rolled it out,” Martin said. “So, there’s nothing new about using AI in financial services.” He parsed out some of the differences between traditional, predictive AI, which might be used to spot anomalous transactions that stray from usual buying patterns and could indicate fraud, and generative AI, whose transformer models generate content. “New content could be text, it could be audio, it could be video,” Martin said. “It could be data itself.”
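The predictive, pattern-watching style of AI Martin describes can be sketched in a few lines. The following is a toy illustration only, not any bank's actual method: it flags transactions that sit far outside a customer's historical spending pattern using a simple z-score rule (the data and threshold are hypothetical).

```python
# Toy sketch of predictive anomaly detection: flag charges that stray
# far from a customer's usual buying pattern. Illustrative only --
# real fraud systems use far richer features and models.
from statistics import mean, stdev

def flag_anomalies(history, new_charges, z_threshold=3.0):
    """Return charges more than z_threshold standard deviations
    from the mean of the customer's historical spend."""
    mu = mean(history)
    sigma = stdev(history)
    return [amt for amt in new_charges
            if abs(amt - mu) / sigma > z_threshold]

# Usual spend hovers around $20-$60; a $5,000 charge stands out.
usual = [25.0, 40.0, 32.5, 55.0, 28.0, 47.5, 60.0, 22.0]
print(flag_anomalies(usual, [35.0, 5000.0]))  # flags only the $5,000 charge
```

A generative model, by contrast, would not score existing transactions against a pattern but produce new content outright, which is the distinction Martin draws.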
The so-called magic of generative AI is its ability to synthesize information to create net-new content, he said. This could include summarizing earnings calls, a task that, Martin soberingly joked, would have eliminated the job he held 20 years ago typing up notes. “I would be eliminated today,” he said. “You can have a computer do that.”
He also cited the intensive compute power needed to run the algorithms that drive AI juxtaposed against the GPU shortage. “We have to deal with the pros and cons of this technology,” Martin said.
Generative AI also comes with some messiness: thanks to hallucination, its output can be a bit all over the place. “It’s like your mansplaining best friend from high school that’s a know-it-all,” Martin said. “Sometimes it’s right; sometimes it’s wrong — you don’t know when.”
AI Use Cases in Fintech
A final panel on AI transforming the financial industry, moderated by Theodora Lau, founder of Unconventional Ventures, included Efi Pylarinou of Pylarinou Advisory; Maya Matthews, investor with March Capital; Elliott Star, former director of data science at Chime; and Steve Dunn, head of innovation and fintech with Sumitomo Mitsui Banking Corp (SMBC).
Dunn said his bank explores innovation that includes AI but in a risk-controlled way. “When it comes to gen AI, there’s a lot of opportunities, a lot of risk at the same time.” So SMBC took a multitiered approach to such efforts, he said, with support from the boardroom down, an agile governance structure to support adoption and reduce impediments to innovation, and the deployment of a scaled incubation function.
“This AI incubator is really designed to rapidly test business cases, measured on feasibility, usability, and value,” Dunn said. The process is to invite teams to run experiments in a safe and secure environment, he said, enabling the teams to build up muscle memory and increase their AI literacy for ongoing innovation efforts. “You need to be able to experiment and the best way to do that is to incubate,” Dunn said. “AI, at some point, is going to be embedded in everything we do, so you need to think about that whole change environment.”
Of course, some organizations, especially in the heavily regulated financial world, are not immediately ready to take on the changes and risks that can come with the use of AI. Star questioned why some financial institutions were blocking the use of AI; blocking it out of a security concern about data leakage would make sense, he suggested, but other worries seemed less warranted. “If they worried about the fact that people are going to use it to generate things they wouldn’t have otherwise generated and put those into the economy of the company — is that really a problem?”
For example, if an employee used ChatGPT to create a marketing plan, Star did not see much of an issue. When it comes to the generation of something to be used by customers, such as an investment synopsis, the same kinds of checks and guardrails in place for human creators could be put to work with AI content, he said. “Any controls you had on a human doing something stupid are also sufficient on controls of something like generative AI doing something stupid.”
While there is plenty of hype in the market that surrounds AI, Matthews offered a venture capital perspective on the rise of the technology. She said there is also real value being created, which includes productivity gains within organizations thanks to AI. As often happens when a growing innovation takes center stage, startups that work with AI are attracting the attention of larger incumbents who want to tap into their more nimble, experimental ideas. “There are enterprises and organizations that are prioritizing leaning into partnering with earlier-stage companies,” Matthews said. “Innovation, especially within the startup community, has definitely yielded some very high valuations, but also on the revenue side there is a ton of value being created.”
AI could be a great opportunity because of the hype, Pylarinou said, since it calls more attention to existing examples that organizations can learn from. “Businesses should go back and look at AI risk management frameworks,” she said.
Such frameworks have existed for at least three or four years, Pylarinou said, offering companies that want to explore the technology a starting point to reference. “It is time, and an opportunity, for businesses that weren’t deploying AI, even traditional AI, to really go find those best practices and adopt them,” she said. How agile and ready companies are to embrace AI, Pylarinou said, could determine how the technology affects them in the long run. “There are precedents — we should take advantage of that.”