Cyberattacks are becoming increasingly common, particularly for banks and other financial institutions.
As the malicious actions of cybercriminals evolve in different ways through the use of emerging technologies, how can the good guys use the same tech – particularly Generative AI (Gen AI) – to advance their cybersecurity efforts?
How does Gen AI advance banks’ cybersecurity efforts?
Synthetic data augments financial datasets with examples of unusual scenarios, helping AI models learn the patterns in the data more efficiently during training. This additional data makes datasets robust enough to tackle some of the biggest cybersecurity challenges.
Phishing, a form of cyberattack that typically begins with a victim opening a seemingly innocuous link in an email, was the second most common cause of IT security breaches and also the costliest, averaging US$4.9m per breach in 2022.
One particularly hard-to-detect variant is spear phishing. In this highly targeted scenario, attackers impersonate an employee’s boss or another known colleague to trick the victim into completing an urgent task, such as updating personal information or sending help to a supposed colleague or family member in need.
Instead, the money or sensitive information they send ends up directly in a hacker’s hands. The kind of content that might be used to target a CFO is often different from emails targeting engineers, and a lack of training data creates a problem uniquely suited for Gen AI.
Gen AI unlocks the ability to create this training data at scale, producing highly realistic text, images and email attachments. Security experts can then train the model on these synthetically generated emails so it can better detect real phishing emails when they appear in the system.
As the model trains and improves, it reduces the rate of false positives, resulting in better detection of sophisticated attacks. In just 24 hours of training, NVIDIA was able to improve its model’s ability to detect phishing attempts by 20%.
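The training loop described above can be illustrated with a deliberately tiny sketch. Everything here is hypothetical: the handful of hard-coded emails stand in for synthetic data that a Gen AI model would generate at scale, and a naive Bayes word-count score stands in for the far larger models a bank would actually deploy.

```python
import math
from collections import Counter

# Hypothetical synthetic training emails. In practice these would be
# generated at scale by a Gen AI model, including images and attachments.
SYNTHETIC_PHISHING = [
    "urgent wire transfer needed please update your account details now",
    "your boss here buy gift cards immediately and send the codes",
    "verify your password via this link or your account will be locked",
]
LEGITIMATE = [
    "quarterly report attached please review before the meeting",
    "lunch on friday to discuss the project roadmap",
    "reminder the team standup moves to ten tomorrow",
]

def train(phishing, legit):
    """Fit word counts for a tiny naive Bayes-style classifier."""
    return {
        "phish": Counter(w for e in phishing for w in e.split()),
        "legit": Counter(w for e in legit for w in e.split()),
    }

def phishing_score(model, email):
    """Log-likelihood ratio: positive means 'looks like phishing'."""
    score = 0.0
    p_total = sum(model["phish"].values())
    l_total = sum(model["legit"].values())
    for w in email.split():
        p = (model["phish"][w] + 1) / (p_total + 1)  # Laplace smoothing
        l = (model["legit"][w] + 1) / (l_total + 1)
        score += math.log(p / l)
    return score

model = train(SYNTHETIC_PHISHING, LEGITIMATE)
print(phishing_score(model, "urgent please verify your account details") > 0)
```

The point of the sketch is the workflow, not the model: as more synthetic examples are added, the score separates phishing from legitimate mail more cleanly, which is how the extra training data translates into fewer false positives.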
How are AI solutions expanding response mechanisms and improving legacy methods and systems?
Credit card fraud is projected to cause an estimated US$43bn in global losses by 2026, a price both businesses and consumers pay. Advances in online banking have made fraud tracking more resource-intensive for financial institutions and regulators.
Traditional methods of fraud detection, like signature comparison, rely on rules used to spot patterns that might indicate suspicious activity.
They also require significant feature engineering by a subject matter expert, increasing the time required to defend against emerging threat vectors.
With legacy systems, false positives are frequent, wasting investigators' valuable time on the wrong transactions.
Fraudsters are now savvy at avoiding these easily recognisable patterns because many are already using AI. Next-generation fraud detection can address all of these shortcomings.
Deep learning capabilities such as Graph Neural Networks (GNNs) can be used to detect fraud with significantly less labelled training data.
That’s because investigators can take a holistic view, incorporating historical data, suspicious behaviour and actual criminal activity.
They can then offload this work to AI, which will identify what is normal and what isn’t, and evaluate relationships between any number of parties to flag suspicious transactions in a way a human might miss.
What role will AI play in accelerating threat analysis?
The use of AI-driven cybersecurity solutions can greatly improve a bank's ability to defend against cyber threats. AI helps get better insights into the data banks already have and can augment systems already in place.
By using data from multiple sources and using machine learning algorithms, banks can detect malicious activity that would otherwise go unnoticed by human analysts.
Furthermore, AI technology uses the power of predictive algorithms, which allow banks to anticipate and prepare for future threats before they arise.
Security analysts are buried in mountains of data and a sea of alerts and false positives, making it impossible to keep up with all the data across a network in real time.
Collecting and analysing it all across a network is cost-prohibitive and difficult without accelerated computing and AI.
AI-enhanced threat detection allows analysts to identify and prioritise anomalies within troves of security data.
This enables security analysts to act on signals that were previously impossible to identify, and with that information, they can determine the next steps quickly before significant damage occurs.
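A minimal sketch of that prioritisation step, under stated assumptions: the host names and event counts below are invented, and a simple z-score over one metric stands in for the multi-source models a real security platform would run at scale. The principle is the same, though: score deviation from the fleet norm and surface only the strongest signals.

```python
import statistics

# Hypothetical alert volume per host over one day; real pipelines
# would aggregate many telemetry sources (logins, flows, DNS, ...).
events_per_host = {
    "host-01": 102, "host-02": 97, "host-03": 110, "host-04": 95,
    "host-05": 104, "host-06": 480,   # sudden spike worth a look
}

def prioritise_anomalies(counts, threshold=2.0):
    """Rank hosts whose activity deviates strongly from the fleet norm."""
    mean = statistics.mean(counts.values())
    stdev = statistics.stdev(counts.values())
    scored = {h: (c - mean) / stdev for h, c in counts.items()}
    flagged = [(h, round(s, 2)) for h, s in scored.items()
               if abs(s) >= threshold]
    # Highest-magnitude deviations first, so analysts act on them first.
    return sorted(flagged, key=lambda hs: -abs(hs[1]))

print(prioritise_anomalies(events_per_host))  # flags only host-06
```

Because everything below the threshold is suppressed, analysts see a short, ranked queue instead of a sea of alerts, and the noisy-but-normal hosts never consume investigation time.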
By investing in AI-driven cybersecurity solutions for incident response processes, banks can reduce damage caused by cyberattacks while preserving customer data safety and privacy.
With the correct tools, incident responses can be streamlined while predictive models can anticipate future threats.