What the EU AI Act Means for Governance in the Financial Sector

Asset managers and fintechs face new compliance rules as €35m (US$36.2m) penalties loom for breaches of the EU's sweeping AI regulations

The EU AI Act's first enforcement deadline arrived on 2 February 2025, marking a shift in how financial institutions must approach artificial intelligence deployment and governance. 

While the legislation was formally adopted in March 2024 and entered into force in August 2024, 2 February represents the first major milestone for compliance, introducing requirements that will reshape how financial services firms develop and deploy AI systems.

Under the new rules, financial institutions must now comply with AI literacy requirements and adhere to specific prohibitions on AI systems. 

The regulations affect all companies operating in the EU, regardless of where they are headquartered, with penalties for non-compliance reaching up to €35m (US$36.2m) or 7% of annual global turnover.


“The impact of the European Union's AI Act is not yet well understood by asset managers,” says Jamil Jiva, Global Head of Asset Management at Linedata, speaking exclusively to FinTech Magazine. 

Research by Linedata indicates that 36% of asset managers currently use AI, with an additional 37% planning to expand its usage. 

The combination suggests that nearly three-quarters of asset management firms will need to assess their compliance with the new regulations.

What changes now

The 2 February deadline introduces immediate prohibitions on specific AI applications. 

Financial institutions can no longer use systems for biometric categorisation, such as determining someone's ethnicity or political views from physical features. 

The Act also bans emotion recognition systems, which could affect customer service applications, and social scoring mechanisms that might influence hiring decisions based on factors such as ethnicity or place of birth.

Companies must ensure employees have appropriate levels of AI literacy, with requirements varying by role. 

Those in areas such as legal services require “advanced proficiency”, while marketing departments need only “basic awareness”, according to the legislation. 

Companies must track internal training and assessments, with regulators empowered to audit these records.

Practical implementation and training

The 2 February deadline brings immediate requirements for AI literacy across financial institutions. 

Organisations must now categorise roles based on their interaction with AI systems and provide appropriate training. 

For instance, risk managers and compliance officers who rely on AI for decision-making require comprehensive understanding of model limitations and potential biases.
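The role-based training records described above can be tracked quite simply. The sketch below is illustrative only: the tier names and role mapping are invented, since the Act leaves the exact categorisation to each firm, but it shows the kind of auditable gap report a regulator could ask for.

```python
from dataclasses import dataclass

# Hypothetical literacy tiers loosely following the Act's role-based
# expectations ("basic awareness" through "advanced proficiency").
TIERS = {"basic": 1, "intermediate": 2, "advanced": 3}

# Illustrative role mapping -- the Act does not prescribe these
# categories; each firm must define its own.
REQUIRED_TIER = {
    "marketing": "basic",
    "customer_service": "intermediate",
    "risk_management": "advanced",
    "legal": "advanced",
    "compliance": "advanced",
}

@dataclass
class Employee:
    name: str
    role: str
    completed_tier: str  # highest AI-literacy training completed

def training_gaps(staff):
    """Return employees whose completed training falls short of the
    tier required for their role -- the record-keeping the Act's
    literacy requirements imply."""
    gaps = []
    for e in staff:
        required = REQUIRED_TIER.get(e.role, "basic")
        if TIERS[e.completed_tier] < TIERS[required]:
            gaps.append((e.name, e.role, required))
    return gaps

staff = [
    Employee("A. Ahmed", "legal", "intermediate"),
    Employee("B. Brown", "marketing", "basic"),
]
print(training_gaps(staff))  # [('A. Ahmed', 'legal', 'advanced')]
```

In practice the role taxonomy, training content, and evidence of completion would all need to be defensible under audit, not just the gap report itself.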


Morné Rossouw, Chief AI Officer at treasury management software provider Kyriba, emphasises the scale of required changes: “Finance teams now face the challenge of ensuring transparency and documentation in AI systems, particularly those for payments and fraud detection. 

“There's a strong emphasis on ethical use and bias mitigation, requiring systems to remain fair and unbiased.”

The focus on data quality and governance necessitates high-quality datasets for forecasting and risk models. 

“The formalisation of human oversight is set to enhance decision-making, with AI agents providing support but under human supervision,” Morné adds.

Parallels with GDPR

Industry executives draw comparisons between the AI Act and the General Data Protection Regulation (GDPR) introduced in 2018. 

“Much like the EU AI Act, GDPR brought sweeping changes, requiring firms across industries to overhaul their data governance practices,” says Jamil. 

He suggests that while many companies initially viewed GDPR as an obstacle to innovation, it eventually became a competitive advantage by demonstrating commitment to privacy and data protection.

Morné notes that the regulatory shift comes amid increasing AI investment: “Nearly 70% of business leaders plan to invest $50m to $250m in AI over the next year, a notable increase from 51% last year. 

“The integration of AI agents into financial operations is becoming more prevalent, offering new avenues for automating routine tasks and enhancing analytical capabilities.”

Focus on targeted solutions

Financial institutions are responding by implementing what Diyan Bogdanov, Director of Engineering Intelligence & Growth at spend management platform Payhawk, calls “right-sized AI.”

This approach emphasises specific AI applications rather than general-purpose solutions.


“When analysing company spending patterns, monitoring expense policy compliance, or detecting fraud, we need AI systems that are self-explanatory, work effectively out of the box, and maintain strict data protection standards,” says Diyan. 

“There's simply no room for black-box decisions or unpredictable outcomes in financial operations.”

The legislation particularly impacts high-risk applications in finance, including credit scoring and insurance pricing. 

These systems require robust human oversight and clear escalation paths for edge cases, alongside comprehensive audit trails of AI-assisted decisions.

Diyan emphasises that this targeted approach aligns with regulatory requirements: “The EU AI Act isn't just another compliance burden — it's a framework for building better AI systems, particularly in financial services. 

“By classifying finance applications like credit scoring and insurance pricing as 'high-risk,' the Act acknowledges what we've long believed: when it comes to financial services, AI systems must be purposeful, precise, and transparent.”

The transition to targeted AI solutions requires significant investment in infrastructure and expertise. 

Financial institutions must balance immediate compliance needs with long-term strategic goals, particularly as AI integration becomes more prevalent in operations.

Investment implications

The timing of the legislation coincides with asset managers increasing their focus on alternative investments due to expectations of continued low returns in public markets. 

“With return on investment in public securities likely to remain low for some time, we will see both asset and wealth managers pile into alternative assets,” Jamil tells FinTech Magazine.

He explains that generative AI can help analyse complex securities like collateralised loan obligations (CLOs), which are pooled debt instruments, and catastrophe bonds, which are insurance-linked securities.

“When you would otherwise be forced to find the underlying conditions of a security in a PDF report, AI can make unstructured data understandable,” he says.

The regulations also introduce requirements for explainable artificial intelligence (XAI) in risk assessment systems. 

This development allows financial services professionals to understand how AI reaches decisions, addressing previous concerns about opacity in AI systems. XAI encompasses processes and methods that make AI decision-making transparent and auditable.
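One way to picture what explainability means in a risk context is a model that reports not just a decision but each factor's contribution to it. The toy linear scorer below is an invented illustration (the features, weights, and bias are made up, and real XAI tooling such as SHAP-style attribution is far more involved), but it shows the auditable per-feature breakdown that opaque models lack.

```python
import math

# Illustrative only: a toy linear credit-scoring model with
# invented features and weights.
WEIGHTS = {"income_band": 0.8, "missed_payments": -1.5, "account_age_years": 0.3}
BIAS = -0.2

def score_with_explanation(applicant):
    """Return an approval probability plus the signed per-feature
    contributions that produced it -- the kind of breakdown
    explainability requirements call for."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))
    return probability, contributions

prob, why = score_with_explanation(
    {"income_band": 3, "missed_payments": 1, "account_age_years": 4}
)
# 'why' shows each feature's signed pull on the decision, e.g. one
# missed payment lowers the logit by 1.5.
```

For genuinely high-risk systems, this breakdown would feed the audit trail and give the human overseer something concrete to challenge when an edge case is escalated.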

"AI allows asset managers to tap into a wealth of previously unexploited customer data and internal information"

Jamil Jiva, Global Head of Asset Management, Linedata

Morné stresses that these transparency standards extend to autonomous systems: “AI agents, in particular, must adhere to these standards, ensuring they operate transparently and without bias.”

The use of AI in alternative investments introduces new compliance considerations. “In the past, Gen AI was seen as a mysterious black box of unknowable algorithms,” Jamil adds. 

“It was impossible to know how the technology arrived at a decision or result, making it difficult to understand and impossible to audit.”

This transformation extends to unstructured data analysis. “With Gen AI, unstructured data like text documents, emails, and other difficult-to-analyse data formats can for the first time be accessed and processed using retrieval-augmented generation,” Jamil explains. 

“In this way, AI allows asset managers to tap into a wealth of previously unexploited customer data and internal information.”
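The retrieval step at the heart of retrieval-augmented generation can be sketched in a few lines. This is a deliberately minimal illustration: production pipelines use embedding models for ranking and an LLM to draft the answer, and the document snippets here are invented. It shows only the core idea of pulling the most relevant chunks of unstructured text to ground a generated response.

```python
# Minimal retrieval step of the kind RAG pipelines perform.
# Chunks are scored by simple word overlap with the query; real
# systems would use vector embeddings instead.
def retrieve(query, chunks, top_k=2):
    """Rank text chunks by overlap with the query's words."""
    q_words = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

chunks = [
    "The CLO tranche is backed by senior secured loans.",
    "Quarterly board minutes and attendance records.",
    "Catastrophe bond payouts trigger on hurricane losses above $1bn.",
]
hits = retrieve("what backs the CLO tranche", chunks, top_k=1)
# The retrieved chunk(s) would then be passed to a generative model
# as grounding context for its answer.
```

Grounding the model's answer in retrieved source text is also what makes the output traceable back to the underlying PDF report, which matters for auditability.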

Governance frameworks and implementation

Financial institutions are reviewing their governance structures to ensure compliance. 

The focus on governance extends to data quality requirements, with companies required to maintain high-quality datasets for forecasting and risk models while ensuring human oversight of AI systems.


Diyan outlines three principles for compliance: “First, purposeful design: each AI implementation must serve a clear role with defined responsibilities.

“Second, human-centric architecture: AI agents should operate as an additional channel alongside existing tools and processes. Third, built-in governance: security and governance can't be afterthoughts.”

"While the US and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones," Diyan adds. 

“The EU AI Act's requirements around bias detection, regular risk assessments, and human oversight aren't limiting innovation — they're defining what good looks like in financial services AI.”

The implementation of governance frameworks requires a multi-layered approach. Diyan outlines specific requirements: “AI systems must maintain strict data protection standards, provide comprehensive audit trails, and operate within clearly defined boundaries. 

“Organisations need to establish proper user permissions, ensuring AI agents can only perform actions based on appropriate authorisation levels.”

Financial institutions must also consider the global implications of their AI governance structures. “As global markets increasingly demand transparent, accountable AI systems, Europe's approach will likely become the de facto standard for financial services worldwide,” Diyan notes.

“In 2025, we will see the adoption of Explainable Artificial Intelligence in risk assessment and management systems,” says Jamil.


“This set of processes and methods will increasingly allow financial services professionals to use AI with greater levels of trust and an understanding of biases and limitations.”

Looking ahead

The implementation timeline extends beyond the initial February deadline. 

Jamil adds: “Without the help of Gen AI, it is well known that private trading can be less transparent. 

“As investors seek transparency around the securities and funds they are investing in, 2025 will see AI used to translate data about these private and esoteric securities.”

The phased approach to regulation allows companies to adapt gradually. 

Jamil concludes: “As firms begin to reap the rewards of Gen AI in 2025, they will have to face new regulations mandating guardrails for AI's use. 

“Striking the right balance between innovation and compliance will be essential for firms that want to maintain their competitive edge without falling afoul of regulatory bodies.”




FinTech Magazine is a BizClik brand 
