Moody’s: AI Rollout Should Be a ‘Balancing Act’ for FIs
In its latest in-depth report, Moody’s warns companies that a thorough implementation plan is vital if they want to integrate artificial intelligence (AI) into their organisations effectively.
This applies to financial services companies too, where the pace of AI rollout represents a delicate balancing act. Integrating AI too quickly could increase the likelihood of faulty outputs, while moving too slowly may compromise an FI’s competitive advantage.
Missteps can prove costly both reputationally and financially, as seen with Alphabet Inc’s Gemini assistant, which at one stage generated inaccurate images.
AI models: Balancing the ideal and the practical
To mitigate the adverse effects, Moody’s says companies should first design AI models that balance ideal aims with practicality.
Of course, designing a risk-free AI model comes with multiple difficulties, because diverse criteria need to be satisfied simultaneously.
- Robust Performance: The model must consistently deliver accurate results
- Ethical and fair: The model must align with human values
- Transparent and interpretable: Users should have access to AI system information
- Compliance with laws and regulations: The model & infrastructure must be compliant
- Secure and private: Strong cybersecurity measures must be implemented
- Energy efficient: The model should minimise energy consumption
- Resilient: The model must accommodate rapid variations in usage and data volumes
- Low maintenance: Operating AI applications should require minimal effort
- Positive financial impact: The model's benefits must outweigh the costs
In practice, many of these requirements are not mutually compatible, and Moody’s says FIs must make trade-offs when deciding how to balance these factors most beneficially.
Unbiased output can compromise performance, powerful AI models consume large amounts of energy to run, and low-latency results are expensive. Organisations must weigh these considerations against one another when planning their AI rollouts.
Ideal, completely risk-free AI models are not feasible for most organisations, so financial services firms need clarity about the risks they are willing to take to achieve strategic objectives. Relevant factors include:
- Business-to-business or business-to-consumer: B2C is higher risk
- Reputation: Reputable finservs stand to lose trust should an AI model fail
- Industry: Sensitive sectors would face heavy consequences for AI failure
- Role of AI model: What a company will allow an AI model access to
- Model complexity: Simpler AI models have fewer moving parts
- In-house development: Building and training AI models in-house incurs substantial costs
- Laws and regulations: AI in banking must adhere to complex rules
Defining an AI strategy: Avoid weakening credit quality
AI is a transformative technology, so much so that Moody’s reminds organisations that it can even transform the business models of debt issuers that it rates.
The reach of AI will extend to product offerings, productivity and investments. Organisations that fail to effectively navigate this technological shift may experience deterioration in credit quality.
Organisations that are too cautious in their AI implementation plans risk being left behind, eroding their competitive position.
This can unfold in several ways, the most obvious of which is that companies fail to adopt AI capabilities at all. Many legacy banks face infrastructure and regulatory compliance hurdles that may slow their AI rollout – and if these are not resolved, FIs may find themselves at a competitive disadvantage.
What’s more, off-the-shelf AI applications may not deliver the level of differentiation financial services companies need. AI chatbots, for example, are already mainstream, so implementing bespoke, customised Gen AI services may be the better option, although these are more costly and complex to put in place.
Yet a rushed or poorly executed rollout carries risks of its own:
- Companies could struggle to deploy AI at scale
- The performance of AI models may fall short
- Reputations could be damaged
- Users may not understand how AI models came to a prediction
AI applications have their own challenges
Lastly, Moody’s says that while AI applications have tremendous potential today, many still face their own challenges.
Where AI applications fail, they do so quietly. Indeed, some companies may only know their AI solution is not working as intended when it is highlighted in the media. This was the case for Air Canada, which was forced to issue a refund to a customer after its chatbot incorrectly said they were entitled to compensation.
The graph below highlights factors that contribute to the failure of AI systems.
The release of Moody's latest report comes after its deep-dive on the five signs of detecting financial crime at shell companies, and its AI outlook for 2024, delving into the impact of AI on financial services companies this year.