“Commodified solution” for AI explainability in the offing

The banking space will see widespread artificial intelligence explainability this year, according to Prema Varadhan, head of AI and chief architect at banking software company Temenos.

The move towards a commodified solution is being driven by growing demand for customer personalisation and an industry-wide focus on transparency.

“When it comes to personalisation, providing I know as a customer that my bank is able to use my data to deliver value back to me, I’m more than happy to share my data … But for banks to be able to convince their customers that yes, they are using the data in the most appropriate manner and that there is no bias, this is where the explainability comes in,” says Varadhan.

“Even though [banks] have applied AI or machine learning they should be able to show the explanation in such a way that they are able to audit and fix it if there is a problem. I think that is an important factor if banks want to apply that personalisation at scale. These are the fundamental capabilities that they need to have in order to be able to do that, then customers will put their trust in those decisions.”

Explainability has long been a hot topic in AI, entering the banking space more recently as the technology has experienced widespread adoption. “Unlike some other hyped-up technology I think AI is here to stay and it’s going to be a big differentiator,” says Varadhan. According to her, the best use cases for AI include personalisation and fraud prevention.

The search for a widespread, commodified AI explainability solution has also gained traction in big tech, a sector that has focused on the issue in recent months. In late November, Google launched its “Explainable AI” facility to tackle the “black box” problem – the inability to explain how a model turns its input data into an outcome. Despite this step forward, Google does not claim to have solved the black box problem within AI, arguing instead that it can “help data scientists do strong diagnoses of what’s going on,” Google Cloud’s head of AI told the BBC. “But we have not got to the point where there’s a full explanation of what’s happening.”

But according to Varadhan, full explainability is possible. Temenos offers a patented explainability solution “as a standard feature”, which employs fuzzy logic, a reasoning system that works with degrees of truth rather than hard true/false values and is therefore suited to describing inherently vague concepts.
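Temenos has not published the internals of its patented mechanism, but the minimal sketch below illustrates the fuzzy logic idea it rests on: a vague concept such as “high income” is represented as a degree of membership between 0 and 1 rather than a hard yes/no cut-off. The function name and thresholds here are purely illustrative assumptions, not Temenos’s implementation.

```python
# Minimal fuzzy logic sketch: a vague concept ("high income") is modelled
# as a degree of membership between 0 and 1 instead of a binary cut-off.
# The thresholds below are illustrative, not taken from any real product.

def high_income_membership(income: float) -> float:
    """Degree (0..1) to which an income counts as 'high'.

    Below 30k -> 0.0, above 90k -> 1.0, with a linear ramp in between.
    """
    low, high = 30_000.0, 90_000.0
    if income <= low:
        return 0.0
    if income >= high:
        return 1.0
    return (income - low) / (high - low)

for income in (25_000, 50_000, 75_000, 100_000):
    degree = high_income_membership(income)
    print(f"income {income:>7}: 'high income' to degree {degree:.2f}")
```

Because each input maps to a readable degree of membership, a decision built from such rules can be traced back to human-interpretable statements, which is what makes fuzzy systems attractive for explainability.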

“Any model that we build, we provide explainability by default as part of the platform. And we list things – if there is a credit decisioning model that we build, when the AI model takes a decision we will have all the factors that contributed to that decision as a big list of things that have happened in the model, and then the user can actually interact with that,” says Varadhan.
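In practice, a factor listing of the kind Varadhan describes might look like the sketch below. It uses a simple linear scoring model, since there each feature’s contribution to the final score is exact and auditable; the feature names, weights, and threshold are invented for illustration and do not reflect Temenos’s actual model.

```python
# Illustrative sketch of listing the factors behind a credit decision.
# A linear scoring model is used because its per-feature contributions
# are exact. All feature names, weights, and the threshold are invented.

WEIGHTS = {
    "income_score": 0.4,
    "repayment_history": 0.35,
    "existing_debt": -0.3,
    "account_age_years": 0.05,
}
THRESHOLD = 0.5

def explain_decision(applicant: dict) -> None:
    # Each factor's contribution is its weight times the applicant's value.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    outcome = "approve" if score >= THRESHOLD else "decline"
    print(f"decision: {outcome} (score={score:.2f})")
    # List every factor, largest influence first, so a user or auditor
    # can see exactly what drove the decision.
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name:<20} contributed {value:+.2f}")

explain_decision({
    "income_score": 0.8,
    "repayment_history": 0.9,
    "existing_debt": 0.6,
    "account_age_years": 3.0,
})
```

The ranked listing is the part that matters for auditing: if a decision is challenged, an operator can see which factor dominated and whether the underlying weight or data point needs fixing.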

“Google and others are trying to do a lot on explainability as well, so I think it will become a commodified solution where AI models can be built with full explainability. So you don’t have to worry about that being your major problem to solve.

“This year I think we will see more and more of those, because the more articles we see about bias in AI decision making – that there is no transparency etc – the more the industry actually responds to that. Very recently there was an article from Google about explainability and we have seen some messages like that from other technology vendors as well, so I think it is going to happen in 2020. The important thing is regulators are talking about it; that has forced a decision to come out pretty quickly and something commercial to be available soon. I think this year is a big year for explainability.”
