Henry Wyard explains how AI can change three key data-driven processes that underpin contemporary anti-financial crime practice.
It’s fast becoming accepted fact that an artificial intelligence revolution has arrived, bringing with it transformational change for all areas of social, economic and political life.
It is unsurprising that businesses which already have longstanding dependencies on technological innovation are actively exploring new AI possibilities. A 2024 survey of financial institutions, for example, found that 91% of respondents were either already using AI or evaluating its potential utilisation.
It is natural that AI would be of great interest for anti-financial crime, a field that sits in the space between national security and financial services, both of which have already embraced AI innovation.
AI transformations
What has driven this skyrocketing level of attention to AI? Most accounts attribute the contemporary AI explosion to significant improvements in three areas.
Firstly, new ways of building AI models (network architectures) have been developed. These new architectures have been combined with two further improvements – increases in the quantity of available training data, and continuing growth in the power of processing units – to produce dramatic increases in the range and sophistication of AI technologies.
In broad terms, the likely transformative effects of AI innovation can be divided into two categories: gains in efficiency and expansions in capability.
Phrased even more simply, AI can transform how much one can do in a given amount of time, and it can also transform the types of things that one is able to do.
Contemporary anti-financial crime and AI
But what do these transformations mean for anti‑financial crime? How can businesses practically take advantage of the new possibilities open to them?
Ultimately, these questions can be answered by understanding how AI can change three key data-driven processes that underpin contemporary anti-financial crime practice. We can conceptualise these as data collection, data verification and data interpretation.
Each of these three tasks faces serious challenges. The broad scope of anti-money laundering (AML) and counter-terrorism financing legislation, along with the wide range of potentially relevant data, mean that the resource required to perform these processes effectively is increasingly large.
Indeed, compliance with anti-financial crime legislation has never been so expensive. In 2023, the global costs of compliance reached an estimated $206 billion.
AI technologies have the potential to transform how all three of these anti-financial crime data processes are conducted.
Data collection
All AML-regulated businesses are obliged to gather data about their clients (and the nature of their clients’ financial activities) not just at the outset of a commercial relationship, but throughout. Even for small enterprises, this can present a daunting amount of information that requires processing.
Moreover, this client information must be referenced against numerous datasets pertaining to financial crime risk. Perhaps the most significant of these are lists of individuals and entities subject to financial sanctions.
Failures to comply with financial sanctions carry the threat of severe penalties, including imprisonment. Yet sanctions lists are just one of many sources against which client profiles must be assessed: others include lists of politically exposed persons, litigation history and adverse media reporting.
AI solutions
One data collection solution already in established use is automated client screening. Advanced systems can use AI to improve searches and continuously update the risk databases against which client profiles are screened, eliminating the need for costly manual intervention.
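To illustrate the principle, the sketch below shows a deliberately simple fuzzy name match against a toy watchlist using Python’s standard difflib module. The names, threshold and list entries are invented for illustration; real screening engines rely on far richer matching (aliases, transliteration, dates of birth and continuously updated risk databases).

```python
# Minimal sketch of fuzzy name screening against a toy watchlist.
# The entries, names and threshold below are illustrative only.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Trading FZE", "Maria Gonzalez"]  # hypothetical entries

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity score between two names, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen_client(client_name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity to the client name meets the threshold."""
    scores = [(entry, similarity(client_name, entry)) for entry in WATCHLIST]
    return [(entry, score) for entry, score in scores if score >= threshold]

if __name__ == "__main__":
    # A slight misspelling should still surface a potential match for human review.
    print(screen_client("Ivan Petrof"))
```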
However, AI collection of structured data – the formatted, standardised information contained in risk databases and sanctions lists – is only part of the story. Natural language processing (NLP) techniques allow for the analysis of unstructured data sources, which can help to connect entities, individuals and criminal activity, as well as to identify adverse media.
AI-enabled collection of unstructured data from a much wider range of sources (for example, neighbourhood watch groups, press releases from law enforcement and news articles) presents human anti-financial crime analysts with immediate access to a much richer pool of specific, useful information to enhance risk screening and investigation processes.
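As an illustration of the NLP step, the sketch below uses the open-source spaCy library to pull people and organisations out of an invented news snippet and flag crude risk keywords. It assumes spaCy and its small English model (en_core_web_sm) are installed; production adverse-media pipelines add relationship extraction, deduplication and far larger risk taxonomies.

```python
# Sketch: extracting people and organisations from an unstructured news snippet
# with spaCy's pretrained NER model, plus a crude risk-keyword check.
# Assumes `pip install spacy` and `python -m spacy download en_core_web_sm`;
# the snippet and keyword list are invented for illustration.
import spacy

RISK_KEYWORDS = {"fraud", "laundering", "sanctions", "bribery"}

nlp = spacy.load("en_core_web_sm")

def extract_adverse_mentions(text: str) -> dict:
    doc = nlp(text)
    # Keep only person and organisation entities for risk screening.
    entities = [(ent.text, ent.label_) for ent in doc.ents
                if ent.label_ in {"PERSON", "ORG"}]
    flagged = {tok.text.lower() for tok in doc} & RISK_KEYWORDS
    return {"entities": entities, "risk_terms": sorted(flagged)}

if __name__ == "__main__":
    snippet = ("Prosecutors allege that Example Holdings Ltd and its director "
               "John Doe were involved in a money laundering scheme.")
    print(extract_adverse_mentions(snippet))
```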
Data verification
Yet even if the data collection processes involved in anti-financial crime were fully optimised, an unavoidable problem remains: how to verify the accuracy and truthfulness of that data, which criminals may seek to falsify through forged documents and fabricated identities.
While there are additional verification measures that businesses can take to counteract these sorts of financial crime typologies (particularly in the form of enhanced due diligence investigations carried out laboriously by skilled intelligence professionals), they add an undeniable cost burden.
AI solutions
An area of data verification that has been subject to focused AI development is electronic identity verification (eIDV), which uses machine learning techniques to automatically determine whether both documentation and the individuals presenting it are genuine. These systems have proved highly successful – photographing a piece of ID and taking a selfie to access online financial services is now a familiar experience for many.
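For a sense of what the matching step involves, the sketch below compares an ID photograph with a selfie using the open-source face_recognition library. The file paths are placeholders, and a real eIDV system would also perform document forensics and liveness detection, which this sketch does not attempt.

```python
# Highly simplified sketch of the selfie-vs-ID-photo matching step in eIDV,
# using the open-source face_recognition library (dlib-based).
# File paths are placeholders; production systems also run document forensics
# and liveness checks that this sketch omits.
import face_recognition

def faces_match(id_photo_path: str, selfie_path: str, tolerance: float = 0.6) -> bool:
    id_image = face_recognition.load_image_file(id_photo_path)
    selfie_image = face_recognition.load_image_file(selfie_path)

    id_encodings = face_recognition.face_encodings(id_image)
    selfie_encodings = face_recognition.face_encodings(selfie_image)
    if not id_encodings or not selfie_encodings:
        return False  # no face detected in one of the images

    # Lower distance means more similar faces; 0.6 is the library's usual default.
    distance = face_recognition.face_distance([id_encodings[0]], selfie_encodings[0])[0]
    return distance <= tolerance

if __name__ == "__main__":
    print(faces_match("passport_photo.jpg", "selfie.jpg"))
```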
Yet eIDV technology is not impervious to criminal abuse. In February 2024, 404 Media reported on a website named OnlyFake, which generated photos of fake IDs for a mere $15. Journalists used the site to generate an entirely fake piece of ID that was convincing enough to pass the eIDV system of a cryptocurrency exchange.
Even advanced contemporary solutions to data verification face serious challenges from fraud and forgery. While AI technology has already contributed significantly to improving ID verification, there remains much scope for further development.
Data interpretation
In the context of anti-financial crime, data interpretation involves analysing the information collected about clients to determine the appropriate level of financial crime risk to apply, and subsequently to detect suspicious activity and transactions.
Automated transaction monitoring has consequently been a feature of AML systems for decades – the UK’s Financial Conduct Authority (FCA), for instance, has published best practice guidance on the technology. Most current automated systems apply statistical models and machine learning to profile client behaviour (assessing current activity against the past activity of similar clients) and screen transactions against hundreds, if not thousands, of pre-defined rules and thresholds.
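The toy example below illustrates the rules-and-thresholds approach in plain Python. The rules, thresholds and transaction fields are invented; real monitoring systems combine hundreds of such rules with statistical profiling of each client’s historical behaviour.

```python
# Toy illustration of rule-and-threshold transaction screening.
# The rules, thresholds and fields below are invented examples only.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    is_cash: bool

HIGH_RISK_COUNTRIES = {"XX", "YY"}   # placeholder jurisdiction codes
CASH_THRESHOLD = 10_000.0            # illustrative reporting threshold

def alerts_for(tx: Transaction) -> list[str]:
    """Return the list of rules triggered by a single transaction."""
    alerts = []
    if tx.is_cash and tx.amount >= CASH_THRESHOLD:
        alerts.append("large cash transaction")
    if tx.country in HIGH_RISK_COUNTRIES:
        alerts.append("payment involving high-risk jurisdiction")
    if 9_000.0 <= tx.amount < CASH_THRESHOLD:
        alerts.append("possible structuring just below threshold")
    return alerts

if __name__ == "__main__":
    print(alerts_for(Transaction(amount=9_500.0, country="XX", is_cash=True)))
```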
Even the most sophisticated systems, however, involve a time delay for the initial analysis processes. Once a threat analysis has been performed, more time is required to incorporate new typologies and trends into monitoring systems. Financial criminals can – and do – exploit this time lag to remain one step ahead.
AI solutions
A concrete example of how novel methods of AI analysis offer a game-changing edge in tackling financial crime can be seen in the application of graph neural network deep learning techniques to financial crime datasets.
The expression of data in graph form, by displaying relationships between entities in a network of edges and nodes, allows financial crime investigators to uncover hidden connections in the information they possess.
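As a minimal illustration, the sketch below uses the networkx library to represent invented entities and relationships as a graph and to surface an indirect chain of intermediaries between a client and a sanctioned entity. In practice, such a graph would be built from client, transaction and corporate-registry data.

```python
# Minimal sketch of graph-based link analysis with networkx.
# The entities and relationships are entirely invented for illustration.
import networkx as nx

G = nx.Graph()
G.add_edge("Client A", "Shell Co 1", relation="director")
G.add_edge("Shell Co 1", "Shell Co 2", relation="shared address")
G.add_edge("Shell Co 2", "Sanctioned Entity X", relation="payment")
G.add_edge("Client B", "Retail Supplier", relation="payment")

def hidden_links(graph: nx.Graph, client: str, target: str):
    """Return the chain of intermediaries connecting a client to a target, if any."""
    if nx.has_path(graph, client, target):
        return nx.shortest_path(graph, client, target)
    return None

print(hidden_links(G, "Client A", "Sanctioned Entity X"))
# -> ['Client A', 'Shell Co 1', 'Shell Co 2', 'Sanctioned Entity X']
print(hidden_links(G, "Client B", "Sanctioned Entity X"))
# -> None
```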
Beyond the link analysis of a single network that a human expert can perform, graph neural network models have the potential to identify patterns across an entire dataset of networks (see tinyurl.com/4h54tesu). In a financial crime context, risk alerts might be generated simply due to the similarity of a client’s network structure to those known to be fraudulent or criminal. These sorts of insights have never existed before. Their realisation by new AI solutions would mark a genuine change in the financial crime landscape.
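The sketch below, which assumes the PyTorch Geometric library, shows the shape such a model might take: a small graph convolutional network that pools node embeddings into a single risk score for a whole client network. The architecture, node features and example graph are placeholders, and the untrained model’s output is meaningless; a real system would be trained on labelled networks of known fraudulent and legitimate clients.

```python
# Minimal sketch of a graph neural network that scores a whole client network,
# using PyTorch Geometric (assumed installed). The two-layer GCN, node features
# and random example graph are placeholders for illustration only.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

class NetworkRiskGNN(torch.nn.Module):
    def __init__(self, num_node_features: int, hidden: int = 32):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.classifier = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)             # one embedding per graph
        return torch.sigmoid(self.classifier(x))   # risk score in [0, 1]

# A toy network: four entities with three features each, connected in a chain.
edge_index = torch.tensor([[0, 1, 2, 1, 2, 3],
                           [1, 2, 3, 0, 1, 2]], dtype=torch.long)
graph = Data(x=torch.randn(4, 3), edge_index=edge_index)
batch = torch.zeros(4, dtype=torch.long)  # all nodes belong to graph 0

model = NetworkRiskGNN(num_node_features=3)
print(model(graph.x, graph.edge_index, batch))  # untrained, so the score is arbitrary
```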
Conclusion
AI solutions alone do not offer a panacea for the various issues of financial crime. AI-based processes provide a powerful tool in combating financial crime, but their effectiveness ultimately depends on who uses them and how they are used.
It is sometimes extremely challenging, if not impossible, to fully understand how some AI models function and why they produce certain results. This issue, often referred to as the ‘explainability’ problem of ‘black box’ AI models, is a major obstacle to AI’s full implementation in compliance systems. Money laundering reporting officers and senior executives must make decisions about potentially denying services to suspicious clients and reporting them to legal authorities; it is vital that they are able to justify and explain their actions.
The increasing use of advanced AI models in the anti-financial crime sphere therefore brings with it a growing need for experts who can bridge the gap between humans and machines. Human experts are crucial not only in training and enhancing AI compliance systems but also in conducting further verification and interpretation of the results produced by complex models. Only by embedding AI within existing anti-financial crime expertise can we turn the very real promise the technology holds into reality.
Author biography
Henry Wyard is a Financial and Environmental Crime Researcher at Themis.