As Wall Street’s adoption of artificial intelligence accelerates, calls for industry guardrails are growing louder.
According to a survey conducted by Broadridge Financial Solutions in May, 84% of firms are at or have moved beyond AI-based proofs-of-concept.
One in five respondents said their firms are using the technology in production, while slightly more, 29%, are running AI-based pilots.
“AI has the potential to transform capital markets and the capital markets industry radically,” Michael Tae, head of strategy for Broadridge, told Markets Media. “There are opportunities for service improvements, faster front-to-back trade lifecycle processing, risk reduction, and improved cost efficiency.”
Over time, AI may even redraw the map of what is considered the financial sector, R. Jesse McWaters, financial innovation lead at the World Economic Forum, testified before the House Financial Services Committee’s Task Force on Artificial Intelligence.
“Small and mid-sized financial institutions that are unable to invest in becoming AI leaders may instead choose to employ AI capabilities of third parties on an as-a-service basis,” he said. “These seismic shifts in the landscape of financial services create new risks. The enormous complexity of some AI systems challenges the traditional models of regulation and compliance.”
Untested AI technologies
A significant concern raised during the hearing was that a wide swathe of AI techniques has never been through a financial crisis and remains untested.
“There are several instances where algorithms implemented by financial firms appeared to act in ways quite unforeseen by their developers, leading to errors and flash crashes,” noted Bonnie Buchanan, head of the School of Finance and Accounting and professor of finance at Surrey Business School, University of Surrey, during the hearing.
There is a tendency in the trading world for machine-to-machine interactions to create feedback loops with damaging effects, agreed fellow Congressional witness McWaters.
Scenario-based models and continued standard stress testing should address the issue, he added.
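The dynamic is easier to see in miniature. The toy simulation below is invented for illustration and does not come from the hearing: two momentum-following bots read the same signal, so a ten-cent upward shock triggers buying, the buying lifts the price, and the higher price triggers still more buying.

```python
# Toy illustration (invented, not from the testimony) of a machine-to-machine
# feedback loop: two momentum-following bots each buy after an up-tick, their
# combined demand pushes the price higher, and the next tick triggers more buying.

def momentum_signal(history):
    """Buy (+1) after an up-tick, sell (-1) after a down-tick."""
    return 1 if history[-1] > history[-2] else -1

history = [100.0, 100.1]  # a tiny upward shock starts the loop
for _ in range(8):
    net_demand = momentum_signal(history) + momentum_signal(history)  # two bots, same rule
    history.append(history[-1] + 0.5 * net_demand)  # net demand moves the price

print([round(p, 2) for p in history])
# [100.0, 100.1, 101.1, 102.1, ...] -- a 10-cent shock becomes a runaway trend
```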
However, one of the most popular AI approaches, machine learning, brings a host of issues with it, according to Douglas Merrill, founder and CEO of ZestFinance, who also testified before the task force.
“Machine-learning models are inherently opaque and inherently biased,” he said. “You only know that a machine learning model has made a decision but not why it made that decision. Not knowing why a decision was made leads to bad outcomes.”
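Merrill’s point about opacity is why explainability tooling has become a cottage industry of its own. As a purely illustrative sketch (the model, feature names, and data below are invented, and the technique is a generic one rather than anything ZestFinance described), permutation importance is one common way to recover part of the “why”: shuffle each input feature and measure how much the model’s accuracy degrades.

```python
# Illustrative sketch: probe an opaque model with permutation importance.
# Everything here is hypothetical; it assumes scikit-learn is available.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # pretend columns: income, debt_ratio, tenure
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# How much does held-out accuracy fall when each feature is scrambled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt_ratio", "tenure"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # larger drop = the model leaned on it more
```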
Although AI implementations are new for many financial institutions, they do not challenge the fundamental principles of existing regulatory frameworks, according to McWaters.
The Federal Reserve, OCC, and FDIC published guidance on effective model risk management in 2011, but Congress could encourage the regulators to update that guidance with best practices for building machine-learning models, as well as for validating, monitoring, and auditing them, suggested Merrill.
“I’d hope through either Congressional intervention or regulatory intervention we would come to a world in which there would be a language to describe what is acceptable before you build a model, and agreed-upon language at the end of models to show if you do indeed have a bias problem,” he said. “The odds are good that you are going to.”
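Merrill did not specify what that agreed-upon language would look like, but one long-standing candidate from US fair-lending practice is the “four-fifths rule”: a model’s approval rate for any group should be at least 80% of the rate for the most-approved group. The sketch below, with invented groups and decisions, shows the check in miniature.

```python
# Illustrative post-hoc bias check (invented example): the four-fifths rule.
# A model is flagged if any group's approval rate falls below 80% of the
# highest group's rate. Groups and decisions here are made up.
from collections import defaultdict

def adverse_impact_ratios(groups, decisions):
    """Return each group's approval rate divided by the best group's rate."""
    approved, total = defaultdict(int), defaultdict(int)
    for g, d in zip(groups, decisions):
        total[g] += 1
        approved[g] += d
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
decisions = [1,   1,   1,   0,   1,   0,   0,   0]  # 1 = approved
for group, ratio in adverse_impact_ratios(groups, decisions).items():
    flag = "OK" if ratio >= 0.8 else "potential bias"
    print(f"group {group}: ratio {ratio:.2f} -> {flag}")
```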