The deployment of Artificial Intelligence (AI) in the financial sector is bringing both benefits and new or amplified risks. While supporting AI-driven innovation in finance, the OECD’s Iota Nassr looks at some of these risks and the different tools policy makers can use to address them.
The financial markets have historically been early adopters of innovative technologies, given the size and weight of market activity in the economy. Every wave of innovation in finance brings purported cost efficiencies, productivity enhancements, higher profitability and improved quality in services offered. Financial service providers deploying AI are looking to secure a competitive advantage, whether through enhanced decision-making, automated execution, gains from improvements in risk management, back-office optimisation or by offering new and highly customised products to their customers at a reduced cost.
But every wave of innovation also brings new risks or the potential amplification of existing financial and non-financial risks in the markets, and gives rise to financial consumer and investor protection concerns. In the case of AI, many of these risks stem from the fact that users only partly understand the underlying workings of the technology. This affects all categories of users and even has an indirect impact on those who do not use AI themselves. The difficulty of following the logic behind models and understanding why and how they generate results, generally described by the term explainability, is associated with a number of significant challenges for the safety and soundness of financial markets. These challenges concern possible incompatibilities with existing financial supervision and internal governance frameworks, but also the broader stability of the markets, especially given the potential for pro-cyclicality and systemic risk build-up.
Two practical examples illustrate the build-up of vulnerabilities:
In trading, although AI algos can increase liquidity during normal times, they can also lead to a convergence of strategies and, as a consequence, to bouts of illiquidity during times of stress. Market volatility could increase through large sales or purchases executed simultaneously, creating new sources of vulnerability. Convergence of trading strategies creates the risk of self-reinforcing feedback loops that can, in turn, trigger sharp price moves and flash crashes. Such convergence also increases the risk of cyber attacks, as it becomes easier for cyber-criminals to influence algorithms or machines that act in the same way.
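The self-reinforcing dynamic described above can be illustrated with a deliberately simple toy model (this sketch and all of its numbers are hypothetical, not taken from the Outlook): traders sell when the price falls below a threshold, and each sale pushes the price down further. When all thresholds coincide (converged strategies), sales fire simultaneously and the drop is far deeper than when thresholds are dispersed.

```python
def simulate(n_traders, same_strategy, steps=50):
    """Toy price-impact model. Each trader sells whenever the price is
    below its own threshold; identical thresholds mean simultaneous
    sales that amplify the fall. All parameters are illustrative."""
    price = 100.0
    if same_strategy:
        # Converged strategies: every trader reacts at the same level.
        thresholds = [99.0] * n_traders
    else:
        # Dispersed strategies: reaction levels spread out.
        thresholds = [99.0 - i * 0.5 for i in range(n_traders)]
    for _ in range(steps):
        price -= 0.05                      # small exogenous downward drift
        sellers = sum(1 for t in thresholds if price < t)
        price -= 0.02 * sellers            # each sale adds further price impact
    return price

# Converged strategies produce a markedly deeper drop than dispersed ones.
print(simulate(10, same_strategy=True))
print(simulate(10, same_strategy=False))
```

Nothing in this sketch captures real market microstructure; it only makes concrete why a convergence of strategies turns a mild shock into a sharp, correlated move.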
Consumer protection risks are heightened in the case of machine learning models for credit risk scoring, where the use of inadequate data can raise risks of disparate impact in lending outcomes and the potential for biased, discriminatory or unfair lending. In addition to inadvertently generating or perpetuating biases, AI-driven models make discrimination in credit allocation hard to identify, while outputs of AI models are difficult to interpret and communicate to declined prospective borrowers.
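One common screen for the disparate impact mentioned above is the adverse-impact ratio (the "four-fifths rule" used in US employment-discrimination practice): the approval rate of a protected group divided by that of a reference group, with values below roughly 0.8 treated as a red flag. This is a minimal sketch of that screen, not a method described in the Outlook, and the applicant data are invented for illustration.

```python
def approval_rate(decisions):
    """Share of applicants approved; decisions is a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected_group, reference_group):
    """Adverse-impact ratio: approval rate of the protected group over
    that of the reference group. Values below ~0.8 commonly trigger
    further review (the 'four-fifths rule')."""
    return approval_rate(protected_group) / approval_rate(reference_group)

# Hypothetical model outputs for two applicant groups
group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # 3/8 approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 approved

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))   # 0.5, well below the 0.8 screening threshold
```

A ratio like this is only a first-pass outcome check; it says nothing about why the model declines one group more often, which is exactly the explainability gap the text describes.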
While many of the potential risks associated with AI in finance are not unique to AI, they are heightened by the complexity of the techniques employed, the dynamic adaptability of AI-based models and, for the most advanced applications, their level of autonomy. The scale and complexity of the decision mechanisms of AI algos and models, and the difficulty of explaining and reproducing them, make these risks challenging to mitigate. Explainability issues are further aggravated by a generalised gap in technical literacy and the mismatch between the complexity characteristic of AI models and the demands of human-scale reasoning and interpretation.
Policy makers need to devote greater regulatory and supervisory attention to ensuring that the use of AI in finance is consistent with regulatory goals of promoting financial stability, duly protecting financial consumers and investors, and promoting market integrity and competition. Policy makers should consider sharpening their existing arsenal of defences against risks emerging from, or exacerbated by, the use of AI in finance.
The forthcoming 2021 edition of the OECD Business and Finance Outlook details risks emerging from the deployment of AI in finance and looks at the different tools policy makers have to address such risks, while supporting AI-driven innovation in the finance sector.
OECD analysis leans toward AI as a technology that augments human capabilities rather than replacing them. A combination of 'humans and machines', whereby AI informs human judgment rather than replaces it (decision-aid instead of decision-maker), could allow the benefits of the technology to be realised while maintaining safeguards of accountability and control over the ultimate decision. Appropriate emphasis needs to be placed on human primacy in decision making, particularly for higher-value use cases (e.g. lending decisions).
One question still remains open when we analyse both AI and distributed ledger technologies applied in finance: are we reaching a level of innovation that challenges the technology-neutral principle that has been at the core of financial market rulemaking in most of the developed world economies?
The 2021 OECD Business and Finance Outlook provides a non-exhaustive review of AI systems currently deployed in the financial sector across application areas in asset management, algorithmic trading, credit assessment, and blockchain-based decentralised financial services. It discusses key emerging risks related to these applications, including for data management and privacy, explainability, and the robustness and resilience of AI systems and their governance. The chapter concludes with policy recommendations that can help policy makers support responsible AI innovation in the financial sector, while ensuring that investors and financial consumers are duly protected and that the markets around such products and services remain fair, orderly and transparent.