AI will explain it to you

Explainable AI: hit or miss be gone!

Explainable artificial intelligence (XAI) removes the last barrier to entrusting transaction monitoring to algorithms: the skepticism about their results.

At the outset of my professional career, I worked as an analyst in the anti-money laundering (AML) department of a global financial institution. I still recall boundless open-plan offices packed with analysts trying to determine how risky cooperation with a particular entity actually was. Every couple of weeks we would hit another headcount milestone: 100, 200, 300 people on board. Back then, we were armed only with internet access when deciding whether a given risk could be mitigated or had to be reported further up the chain of command.

Today, 15 years later, while I think we had a great time virtually following our clients to the Caymans, Belize or at least Wyoming, it strikes me how inefficient and fallible we were. As current numbers show, we were also quite expensive: according to LexisNexis, the complete know your customer (KYC) process costs banks around $141,000 per FTE a year.

Sheep in wolf’s clothing

These days, most financial institutions rely on rather simple rule-based solutions, which are far from perfect because they are powered by historical patterns. In practice, this means that only transactions similar to ones previously categorised as suspicious are flagged and scrutinised.

Another problem is the number of false-positive alerts that existing solutions produce on a daily basis. According to Reuters, 80% of alerts should never have been raised. Still, each and every one of them needs to be reviewed by an AML analyst. And so we come full circle: back to an analyst trying to determine whether or not there's any money laundering involved.
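To see why such systems over-fire, consider a minimal sketch of rule-based monitoring. The rules, thresholds and watchlist below are purely hypothetical, chosen for illustration; real systems run hundreds of rules, but the mechanics are the same: any single hit raises an alert that an analyst must review.

```python
# A minimal sketch of rule-based transaction monitoring.
# All rules, thresholds and country codes are hypothetical examples.

def rule_based_flags(transaction):
    """Return the list of rules a transaction trips; any hit raises an alert."""
    flags = []
    # Rule 1: large movements above a fixed threshold
    if transaction["amount"] > 10_000:
        flags.append("large_amount")
    # Rule 2: counterparty in a jurisdiction seen in past suspicious cases
    if transaction["country"] in {"KY", "BZ"}:  # hypothetical watchlist
        flags.append("high_risk_jurisdiction")
    # Rule 3: amounts just under the threshold (a structuring pattern)
    if 9_000 <= transaction["amount"] <= 9_999:
        flags.append("possible_structuring")
    return flags

alerts = [t for t in [
    {"amount": 12_500, "country": "GB"},   # flagged: large amount
    {"amount": 9_500,  "country": "DE"},   # flagged: structuring pattern
    {"amount": 150,    "country": "KY"},   # flagged: jurisdiction only
    {"amount": 80,     "country": "FR"},   # not flagged
] if rule_based_flags(t)]
```

Note that three of the four transactions are flagged, yet none of the rules says anything about how likely each alert is to be genuine; that gap is exactly what the risk-scoring approach described below addresses.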

The new black

At Comarch, where I work today, we tried a new, AI-based approach. It involved prioritising transactions flagged as suspicious and pre-selecting them for further analysis. For each flagged case or client, we estimated the risk of money laundering using powerful classification algorithms. Transactions below a certain risk threshold would be cut off or hibernated, significantly reducing the workload.
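The triage step can be sketched as follows. The scoring function here is a hand-written stand-in for a trained classifier's predicted probability, and the threshold is a hypothetical value; both are illustrative only, not Comarch's actual model.

```python
# A minimal sketch of alert triage by risk score.
# risk_score() stands in for a trained classifier; the weights and
# the threshold are hypothetical, for illustration only.

RISK_THRESHOLD = 0.3  # hypothetical cut-off below which alerts are hibernated

def risk_score(alert):
    """Stand-in for a classifier's predicted probability of money laundering."""
    score = 0.0
    score += 0.4 if alert["amount"] > 10_000 else 0.1
    score += 0.3 if alert["new_counterparty"] else 0.0
    score += 0.2 if alert["cross_border"] else 0.0
    return min(score, 1.0)

def triage(alerts, threshold=RISK_THRESHOLD):
    """Split alerts: high-risk cases go to analysts, the rest are hibernated."""
    scored = [(risk_score(a), a) for a in alerts]
    investigate = [a for s, a in scored if s >= threshold]
    hibernate = [a for s, a in scored if s < threshold]
    return investigate, hibernate

alerts = [
    {"id": 1, "amount": 15_000, "new_counterparty": True,  "cross_border": True},
    {"id": 2, "amount": 500,    "new_counterparty": False, "cross_border": False},
    {"id": 3, "amount": 2_000,  "new_counterparty": True,  "cross_border": False},
]
investigate, hibernate = triage(alerts)
```

Only the alerts above the threshold reach an analyst; the low-score alert is hibernated, which is where the workload reduction comes from.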

The very promising results we achieved pushed us towards further research, and so we turned to explainable models. That's because the biggest obstacle to AI adoption per se was the lack of clear reasoning behind each particular prediction, a typical drawback of black-box models.

Hit or miss begone

Not only was it challenging to determine whether a specific transaction was a criminal attempt; it was even harder to guess why an AI system considered the transaction suspicious or just fine. And that's exactly where XAI comes into the picture.

Explainable means that humans can understand how an AI algorithm works without any additional clarification of the underlying data processing model. XAI is "a suite of machine learning techniques that enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners," according to a definition by David Gunning, one of the most notable researchers of advanced AI systems.

XAI is the missing link between humans and big data. XAI does more than just display data; it interprets the data, saying: I understand why, I understand why not, I know when to trust you, I know why you erred. This kind of approach brings a new quality to detecting financial crimes: for the first time in history, AI can actually interact with us.

Not all bases covered

The concept of artificial intelligence for banking is nothing new. While in some banking processes, such as customer service, the value of AI is beyond doubt, at least at the technical level, in others, such as security and risk control, it still faces challenges, the biggest being transparency and explainability.

This is why processes like AML or fraud prevention were long left outside the scope of AI. The institutions that did decide to invest in AI for AML were, to put it mildly, a bit disappointed with the results. The security and risk control factors mentioned above are of utmost importance for any bank, as they can make or break its reputation as a public-trust institution. AI alone just wasn't enough, because it couldn't substantiate its findings.

Verdict without evidence

The problem is that a transaction marked suspicious by a transaction monitoring system is not a sufficient premise for an investigation. Nor can it be a verdict in itself; otherwise it would be a verdict without evidence.

For this reason, if AI is to support banks in the fight against financial crime, regulators demand transparency about how AI arrives at its conclusions.

Humans are unwilling to adopt techniques that are not directly interpretable, tractable and trustworthy. We are programmed to ask "why", and we are hardly ever satisfied with "because". When decisions derived from AI-based systems ultimately affect human lives, there is an emerging need to understand how such decisions are reached. The danger lies in acting upon decisions that are not justifiable or legitimate, or that simply do not allow for detailed explanations. XAI stakeholders, including regulators, managers and users affected by model decisions, say that understanding the logic behind AI helps them stay compliant by understanding the situation they find themselves in.

Source of truth

Unlike traditional AI, explainable AI exposes the logical path of deduction leading to a particular decision. It presents the factors, or sets of factors, that have the greatest influence on that decision, laid out as a simple logic tree of if-then steps. It's a powerful weapon in the fight against fraudsters, but also a kind of source of truth for regulators.
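The if-then logic tree can be sketched in a few lines. The tree below is hand-built and hypothetical (real systems derive it from a trained model), but it shows the key idea: alongside the verdict, the system returns the exact chain of conditions that produced it, which an analyst or regulator can read directly.

```python
# A minimal sketch of an XAI-style explanation: walking a hypothetical,
# hand-built decision tree and recording the if-then path behind a verdict.

TREE = {
    "feature": "amount", "threshold": 10_000,
    "left":  {"feature": "new_counterparty", "threshold": 0.5,
              "left":  {"verdict": "clear"},
              "right": {"verdict": "suspicious"}},
    "right": {"feature": "cross_border", "threshold": 0.5,
              "left":  {"verdict": "clear"},
              "right": {"verdict": "suspicious"}},
}

def explain(tree, case):
    """Return the verdict and the if-then conditions that led to it."""
    path = []
    node = tree
    while "verdict" not in node:
        value = case[node["feature"]]
        if value <= node["threshold"]:
            path.append(f"{node['feature']} <= {node['threshold']}")
            node = node["left"]
        else:
            path.append(f"{node['feature']} > {node['threshold']}")
            node = node["right"]
    return node["verdict"], path

verdict, path = explain(TREE, {"amount": 15_000, "cross_border": 1})
# The returned path lists each rule the case satisfied on the way
# to the verdict, i.e. the "evidence" behind the decision.
```

This is what turns a bare flag into a justified one: instead of "suspicious", the system says "suspicious, because the amount exceeded 10,000 and the transfer was cross-border".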

XAI is a new dawn for financial institutions, especially in their fight against money laundering and their mission to stay secure and keep risk under control. It's an example of efficient cooperation between an intelligent machine and a human who guides it and sets its goals. It's a tool to face the fraudsters with, and win.

By Aleksandra Jarosińska, global business development manager, Comarch
