How AI is transforming compliance management
How can artificial intelligence be used to improve financial services compliance? Aptean's Martin Ellingham explains
Artificial Intelligence (AI) has been around in some shape or form since the 1960s. Yet, although now decades old, as a technology it's still in its relative infancy.
That's not to say that AI hasn't developed over the decades; of course it has. It now presents itself not as a standalone technology but as a distinct and effective set of tools that, while not a panacea for all business ills, certainly brings a whole host of benefits to the business world.
As with all new and emerging technologies, wider understanding takes time to develop, and this is proving especially true of AI, where a lack of understanding has led to a cautious, hesitant approach.
Nowhere is this more evident than when it comes to compliance, particularly within the financial services sector.
Until very recently, the UK's Financial Conduct Authority (FCA) had hunkered down behind its policy of demanding maximum transparency from banks in their use of AI and machine learning algorithms, mandating that they justify all kinds of automated decision making and almost, but not quite, shutting down the use of AI in front-line customer interactions.
But as regulators learn more about the potential benefits of AI, and see first-hand how businesses are implementing AI tools not only to increase efficiency but to add a further layer of customer protection to their processes, they are gradually peeling back the tight regulations to make more room for AI.
The FCA’s recent announcement of the Financial Services AI Public Private Forum (AIPPF), in conjunction with the Bank of England, is testament to this increasing acceptance of the use of AI.
The AIPPF is set to explore the safe adoption of AI technologies within financial services.
And, while the FCA is not pulling back on its demand that AI technology be applied intelligently, the forum signals a clear move forward in its approach to the technology, recognising how financial services firms are already making good use of certain AI tools to tighten up compliance.
Complexity and bias
Some issues stand in the way of wider adoption of AI.
To start with, AI is inherently complex. If firms are to deploy AI, in any guise, they need to ensure they have a solid understanding not only of the technology itself but of the governance surrounding it.
The main problem here is the worldwide shortage of programmers. With the list of businesses wanting to recruit them no longer limited to software companies, the shortage is only getting more acute.
And, even if businesses are able to recruit AI programmers, if it takes an experienced programmer to understand AI, what hope does a compliance expert have?
For the moment, there is still a nervousness among regulators about how they can possibly implement robust regulation when there is still so much to learn about AI, particularly when there is currently no standard way of using AI in compliance.
With time this will obviously change, as AI becomes more commonplace and general understanding increases.
And, instead of the digital natives that are spoken about today, businesses and regulators will be led by AI-natives, well-versed in all things AI and capable of implementing AI solutions and the accompanying regulatory frameworks.
As well as a lack of understanding, there is also the issue of bias.
While businesses have checks and balances in place to prevent human bias coming into play, for lending decisions for example, they might be mistaken in thinking that implementing AI technologies will eradicate any risk of bias emerging.
AI technologies are programmed by humans and are therefore fallible, with unintended bias a well-documented outcome of many AI trials, leading certain academics to argue that bias-free machine learning doesn't exist.
This presents a double quandary for regulators. Should they encourage the use of a technology where bias is seemingly inherent? And, if they do pave the way for the wider use of AI, do they understand enough about the technology to pinpoint where any bias has occurred, should the need arise?
With questions such as these, it's not difficult to see why regulators are taking their time to understand how AI fits with compliance.
Complementary AI
So, where are we seeing real benefits from AI with regard to compliance, if not right now then in the near future?
AI is very good at dealing with tasks on a large scale and in super-quick time. It’s not that AI is more intelligent than the human brain, it’s just that it can work at much faster speeds and on a much bigger scale, making it the perfect fit for the data-heavy world in which we all live and work.
For compliance purposes, this makes it ideal for double-checking work and for detecting systemic faults, one of the major challenges that regulators in the financial sector in particular have faced in recent years.
In this respect, rather than a replacement for humans in the compliance arena, AI is adding another layer of protection for businesses and consumers alike.
When it comes to double-checking work, AI can pinpoint patterns or trends in employee activity and customer interactions much quicker than any human, enabling remedial action to be taken to ensure adherence to regulations.
Similarly, by analysing the data from case management solutions across multiple users, departments and locations, AI can readily identify systemic issues before they take hold, enabling the business to rectify its practices before customers are adversely affected and before the business itself contravenes regulatory requirements.
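As a rough illustration of this kind of pattern detection, the minimal Python sketch below flags weeks where a department's case volume drifts well outside its own historical norm. The data layout, column names and threshold are illustrative assumptions for the purposes of the example, not a description of any particular case management product.

```python
# A minimal sketch of spotting potential systemic issues in case data:
# flag weeks where a department's case volume is a robust outlier for
# that department. Column names and the threshold are assumptions.
import pandas as pd


def flag_systemic_issues(cases: pd.DataFrame, threshold: float = 3.5) -> pd.DataFrame:
    """Return rows whose case volume deviates sharply from that department's norm."""

    def robust_score(volumes: pd.Series) -> pd.Series:
        # Median/MAD score, so a single large spike cannot mask itself;
        # 0.6745 is the usual consistency constant for MAD-based z-scores.
        median = volumes.median()
        mad = (volumes - median).abs().median()
        return 0.6745 * (volumes - median) / (mad if mad else 1.0)

    scores = cases.groupby("department")["cases_opened"].transform(robust_score)
    flagged = cases.assign(score=scores)
    return flagged.loc[flagged["score"].abs() > threshold]


if __name__ == "__main__":
    # Toy data: the "loans" team sees an unusual spike in week 6.
    data = pd.DataFrame({
        "department": ["loans"] * 6 + ["cards"] * 6,
        "week": list(range(1, 7)) * 2,
        "cases_opened": [10, 12, 11, 9, 10, 48, 20, 22, 21, 19, 20, 21],
    })
    print(flag_systemic_issues(data))  # only the week-6 spike for "loans" is flagged
```

In practice the same idea scales up to far richer signals than weekly volumes, but the principle is the same: establish each team's normal pattern and surface the deviations for a human compliance expert to review.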
Likewise, when it comes to complaint management, for example, AI can play a vital role in determining the nature of an initial phone call, directing the call to the right team or department without the need for human intervention and fast-tracking the more urgent cases.
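To make that triage idea concrete, here is a deliberately small Python sketch using scikit-learn that classifies a transcribed complaint into a handling team and flags urgent wording for fast-tracking. The teams, training phrases and urgency keywords are invented for illustration only; a real system would be trained on a firm's own labelled complaint history.

```python
# A deliberately small sketch of complaint triage: classify a transcribed
# complaint into a handling team and flag urgent wording for fast-tracking.
# The teams, training phrases and urgency keywords are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

TRAINING_TEXTS = [
    "I was charged twice for the same card payment",
    "my card payment was declined but the money left my account",
    "my mortgage application has been stuck for weeks",
    "nobody has updated me on my mortgage offer",
    "I cannot log in to the mobile banking app",
    "the app crashes every time I try to transfer money",
]
TRAINING_TEAMS = ["cards", "cards", "mortgages", "mortgages", "digital", "digital"]

URGENT_KEYWORDS = ("fraud", "stolen", "locked out", "vulnerable")

# Fit a simple text classifier on the labelled examples.
router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(TRAINING_TEXTS, TRAINING_TEAMS)


def triage(complaint: str) -> dict:
    """Suggest a handling team and whether the case should be fast-tracked."""
    team = router.predict([complaint])[0]
    urgent = any(keyword in complaint.lower() for keyword in URGENT_KEYWORDS)
    return {"team": team, "fast_track": urgent}


print(triage("The mobile app crashes whenever I try to log in and I am locked out"))
# Expected: routed to the "digital" team and flagged for fast-tracking.
```

The human agent still handles the conversation; the model simply makes sure the case lands with the right team, with the most urgent complaints dealt with first.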
Again, it’s not a case of replacing humans but complementing existing processes and procedures to not only improve outcomes for customers, but to increase compliance, too.
At its most basic level, AI can minimise the time taken to complete tasks and reduce errors, which, in theory, makes it the ideal solution for businesses of all shapes, sizes and sectors. For highly regulated industries, where compliance is mandatory, it’s not so clear cut.
While there are clearly benefits to be had from implementing AI solutions, for the moment, they should be regarded as complementary technologies, protecting both consumers and businesses by adding an extra guarantee of compliant processes.
While knowledge and understanding of the intricacies of AI are still growing, it would be a mistake to implement AI technologies across the board, particularly when a well-considered human response to the nuances of customer behaviours and reactions plays such an important role in staying compliant.
That’s not to say that we should be frightened of AI, and nor should the regulators.
As the technology develops, so will our wider understanding. It's up to businesses and regulators alike to do better: to be totally transparent about their uses of AI and to put in place a robust, reliable framework to monitor the ongoing behaviour of their AI systems.
This article was written by Martin Ellingham, director of product management, compliance, at Aptean.
For more information on all topics for FinTech, please take a look at the latest edition of FinTech magazine.