In the age of machine learning, when organizations rely on algorithms that consume, process, and ultimately deliver a verdict on large amounts of data, it can be difficult to understand and explain what is happening behind the scenes – and why the result is what it is.
Featurespace product manager Richard Graham told PYMNTS that explainability of models is critical in financial services, and indeed in any other vertical that relies on advanced technologies to make day-to-day decisions that affect people’s daily lives. At a high level, model explainability is simply the concept of being able to understand what happens when inputs are ultimately turned into outputs.
Many artificial intelligence (AI) solutions produce a decision, but they often don’t shed light on the logic behind it, Graham said.
Take applying for a job. You submit an online CV with a wide range of details that paint a picture of career progression and aspirations. Next comes an email from HR that simply states that the application will not move forward. That’s all – there is no explanation as to why the decision was made.
“Model explainability takes inputs, processes the data, and then is able to give outputs showing how it arrived at the conclusion,” Graham told PYMNTS. “It’s the useful and valuable information that companies can then use to justify the accuracy of technology decision-making.”
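To make the inputs-to-outputs idea concrete, here is a minimal sketch of an explainable risk score – a simple logistic model whose per-feature contributions show why a transaction was flagged. This is an illustration only, not Featurespace’s actual technology; every feature name and weight below is hypothetical.

```python
# Hypothetical sketch: a linear risk score whose per-feature
# contributions explain why a transaction was flagged.
# Feature names and weights are invented for illustration.
import math

WEIGHTS = {
    "amount_vs_avg": 1.8,     # deviation of amount from the customer's average
    "new_device": 2.5,        # 1 if the device has never been seen before, else 0
    "foreign_location": 1.2,  # 1 if the location has never been transacted from, else 0
}
BIAS = -3.0

def score(features: dict) -> tuple[float, dict]:
    """Return a fraud probability plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    logit = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))
    return probability, contributions

prob, why = score({"amount_vs_avg": 2.0, "new_device": 1, "foreign_location": 1})
# 'why' ranks the reasons an investigator would see, largest contribution first
for name, contrib in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {contrib:+.2f}")
```

The point of the breakdown is the “why”: instead of a bare score, an analyst sees which inputs pushed the decision, in descending order of influence.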
This level of transparency – understanding the “why” and not just the “what” – is essential in financial services, where banks and other financial institutions (FIs) collect billions of data points on millions of customers, linked to hundreds of millions of transactions.
An effective backup
The increase in fraud and money laundering attempts makes this more important than ever. Legacy technology is good at spitting out explainable rules, Graham said — but all FIs, amid the big digital shift, are looking to introduce more machine learning and behavioral analytics for their customers.
It’s easier said than done.
“Some of the barriers I’ve seen for FIs when adopting new technologies have to do with trust,” he said. “Can you trust these new models and algorithms to derive meaningful insights from the important information that is coming in – and more importantly, can you be sure that they are better than the existing rules that are already in legacy technology?”
Explainability can boost those levels of trust and reduce some of the false positives that characterize existing legacy processes, he said.
The goals of model explainability
A well-designed model will show the user all the data it used to come to a conclusion, Graham said. For example, it will look at whether a banking app user has logged in multiple times from different locations and determine whether spending habits have changed. To dig deeper into the cascading volumes of online payments in the pandemic era, FIs need to weigh the evidence behind red flags more carefully.
This “will give the investigator fewer false positives, and they’ll have a better understanding of the different types of fraudulent activity that’s happening, and they’ll get more complete information on which to base their decisions,” Graham said.
Ultimately, this will also benefit end customers. For example, beyond reporting account behavior that might be “several standard deviations” beyond typical spending, a bank can tell the end customer that someone impersonating them logged in from a new location from which they had never transacted.
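That “several standard deviations” check can be sketched as a simple z-score test against a customer’s spending history. This is a toy illustration of the statistical idea, not the vendor’s behavioral analytics; the amounts and the threshold are invented.

```python
# Toy sketch of a "several standard deviations" spending check.
# Data and threshold are illustrative, not a production rule.
from statistics import mean, stdev

def spending_z_score(history: list[float], new_amount: float) -> float:
    """How many standard deviations new_amount sits from typical spending."""
    mu = mean(history)
    sigma = stdev(history)  # sample standard deviation
    return (new_amount - mu) / sigma

history = [42.0, 38.5, 51.0, 45.5, 40.0]  # past transaction amounts
z = spending_z_score(history, 400.0)
if z > 3:  # "several standard deviations" beyond typical
    print(f"flag for review (z = {z:.1f})")
```

A rule like this is trivially explainable – the investigator (and the customer) can be told exactly how far outside normal behavior the transaction fell.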
In 2022 and beyond, banks will be able to leverage risk and fraud management as a competitive and strategic advantage, he said.
“Fraud will force every bank to fight for its reputation, and financial institutions are already beginning to make it clear that they have anti-fraud controls in place to better protect customers,” he said. “Fraud prevention technology will be a differentiator in 2022.”