Google & AIR Set Out Framework for AI Risk in Banking Sector

Google Cloud partners with non-profit the Alliance for Innovative Regulation (AIR) to propose new guidelines for managing generative AI risks in financial institutions

Financial institutions need to revamp their model risk management frameworks to account for the emergence of generative artificial intelligence, according to a new paper from Google Cloud and the Alliance for Innovative Regulation (AIR).

The paper, released by Google Cloud, Alphabet Inc’s enterprise tech division, and AIR, a non-profit organisation focused on financial regulation modernisation, estimates that generative AI (Gen AI) could contribute £270 billion (US$350bn) annually to the banking sector.

Generative AI, a form of artificial intelligence that creates new content based on patterns learned from its training data rather than simply analysing existing data, requires specific governance frameworks to manage potential risks, the paper argues.

Model risk in the AI era

"Striking a balance between harnessing its potential and mitigating its risks will be crucial for the adoption of generative AI among financial institutions," say Behnaz Kibria, Director of Government Affairs and Public Policy at Google Cloud and Jo Ann Barefoot, Co-Founder and CEO of AIR in a blog published on Google Cloud’s website.

The paper outlines how existing model risk management frameworks, which financial institutions use to assess and control risks in their decision-making tools, can be adapted for generative AI applications. These frameworks typically include validation processes, governance structures, and risk mitigation strategies.


Financial institutions are implementing Gen AI solutions across multiple business functions. These range from customer service automation to fraud detection systems and regulatory compliance tools. The technology differs from traditional AI systems in its ability to generate new content rather than simply analyse existing data.

Both writers emphasise that the technology sector and financial institutions must work together to ensure responsible implementation of these systems.

Regulatory clarity needed

The paper identifies three areas where regulatory guidance needs updating. These include documentation requirements for AI models, evaluation methods for AI systems, and implementation controls.

Model documentation refers to the detailed recording of how AI systems make decisions, including the data sources used and the decision-making processes involved. This documentation becomes crucial for audit trails and regulatory compliance.
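As a rough illustration, a documentation record for a Gen AI model might be structured along the following lines. This is a minimal sketch in Python; the schema, field names and example values are assumptions chosen for illustration, not a format prescribed by the paper.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: this schema is an assumption,
# not a format prescribed by the Google Cloud / AIR paper.
@dataclass
class ModelDocumentation:
    model_name: str               # internal identifier for the Gen AI model
    version: str                  # model or fine-tune version
    intended_use: str             # the approved business function
    data_sources: list[str]       # provenance of training / grounding data
    known_limitations: list[str]  # documented failure modes
    last_validated: date          # date of most recent independent validation
    accountable_owner: str        # owner under the governance structure

# Hypothetical example record, usable as an audit-trail entry.
doc = ModelDocumentation(
    model_name="complaint-summariser",
    version="2024.06",
    intended_use="Summarising customer complaints for compliance review",
    data_sources=["internal complaints archive", "regulatory guidance corpus"],
    known_limitations=["may omit low-frequency complaint categories"],
    last_validated=date(2024, 6, 1),
    accountable_owner="Model Risk Committee",
)
print(doc.model_name, doc.last_validated)
```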

Evaluation methods involve techniques such as ‘grounding,’ where AI outputs are verified against trusted sources. This process helps ensure the accuracy and reliability of AI-generated content and decisions.
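A heavily simplified sketch of what a grounding check might look like follows. It assumes generated answers are compared against passages retrieved from a trusted source; the token-overlap test and the 0.6 threshold are illustrative stand-ins for the semantic-similarity or entailment checks used in production systems.

```python
# Simplified sketch of a grounding check: a generated answer is accepted
# only if it is sufficiently supported by a trusted source passage.
# The token-overlap test and threshold are illustrative assumptions.
def is_grounded(answer: str, trusted_passages: list[str], threshold: float = 0.6) -> bool:
    answer_tokens = set(answer.lower().split())
    if not answer_tokens:
        return False
    for passage in trusted_passages:
        passage_tokens = set(passage.lower().split())
        overlap = len(answer_tokens & passage_tokens) / len(answer_tokens)
        if overlap >= threshold:
            return True   # answer is supported by this trusted source
    return False          # no support found: flag the output for review

sources = ["The standard variable rate for this product is 5.2% as of June 2024."]
print(is_grounded("The standard variable rate is 5.2%", sources))  # True
print(is_grounded("The rate was cut to 1%", sources))              # False
```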

“Regulators could anchor to industry best practices and standards that they consider strong – perhaps presumptive – evidence that the requirements of model risk management frameworks have been met,” Kibria and Barefoot write.

Implementation controls and oversight

The paper suggests that financial institutions should implement specific controls for AI systems, including monitoring protocols and human oversight. These measures aim to ensure AI systems remain within acceptable risk parameters.


Continuous monitoring systems track AI performance and flag potential issues in real time, while human oversight ensures decisions align with institutional policies and regulatory requirements.
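A minimal sketch of such a monitoring-and-escalation loop, assuming each output carries a grounding or confidence score, might look like this. The threshold value, function names and routing logic are hypothetical, not drawn from the paper.

```python
# Hypothetical monitoring-and-escalation loop. The RISK_THRESHOLD value,
# function names and routing logic are assumptions for illustration.
RISK_THRESHOLD = 0.8  # assumed institutional risk appetite

def log_event(status: str, text: str, score: float) -> None:
    # Audit-trail entry; in practice this would feed a monitoring system.
    print(f"[{status}] score={score:.2f} text={text[:40]!r}")

def review_output(output_text: str, grounding_score: float) -> str:
    """Route a Gen AI output: auto-approve, or escalate for human review."""
    if grounding_score >= RISK_THRESHOLD:
        log_event("auto_approved", output_text, grounding_score)
        return "approved"
    # Below-threshold outputs are held back, keeping a human in the loop.
    log_event("escalated", output_text, grounding_score)
    return "pending_human_review"

print(review_output("Your application meets criteria X and Y.", 0.65))
```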

Third-party management

The document also addresses the management of third-party AI providers, a crucial consideration as many financial institutions rely on external technology vendors for their AI capabilities.

The recommendations extend to shared responsibility models between financial institutions and their technology providers, outlining how risk management responsibilities should be divided. This includes clear delineation of roles in model validation, ongoing monitoring, and risk mitigation.
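For illustration only, such a delineation could be recorded as a simple responsibility map. The specific allocation below is an assumption, not one set out in the paper; it simply shows what a clear division of roles could look like in practice.

```python
# Purely illustrative responsibility map between a bank and its AI
# provider. The specific allocation is an assumption, not one the
# paper sets out; it shows what 'clear delineation' could mean.
SHARED_RESPONSIBILITIES = {
    "model_development":   "provider",     # foundation model training
    "model_documentation": "shared",       # provider supplies model cards; bank records usage
    "model_validation":    "institution",  # independent validation against the use case
    "ongoing_monitoring":  "shared",       # provider: platform health; bank: output quality
    "risk_mitigation":     "institution",  # controls, oversight, escalation
}

for task, owner in SHARED_RESPONSIBILITIES.items():
    print(f"{task:20s} -> {owner}")
```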

The paper proposes that regulators acknowledge established governance practices and provide enhanced regulatory clarity across four key areas: model governance, model development, model validation, and third-party risk management.

For financial institutions using third-party AI systems, the paper emphasises the importance of maintaining oversight while leveraging external expertise. This includes establishing clear lines of responsibility and maintaining appropriate levels of internal expertise to effectively manage these relationships.

As the report says: “Collaboration between industry participants, regulators and governmental bodies will be key.”

