What is explainability, and why is it important?


KK Gupta

29 Jun 2022

Explainability, on a basic level, is showing how an algorithm reached the decision or conclusion that it did. Without it, using an AI system is not dissimilar to trying to work out the ingredients of a dish without knowing the recipe: you probably know roughly what's in it, but you can't be certain.

If AI is being used to make decisions on, say, financial crime, credit risk, or suitability, then transparency is key to avoiding bias, unconscious or conscious, and to satisfying the client and regulator that the end output is appropriate and just.

Until recently, this transparency issue had been largely passed over. Risk technology vendors were unwilling to reveal their proprietary algorithms, and interactions between different layers of decisioning were complex and challenging to pull out. And in any case, the uses and reach of AI were limited. 

This is no longer the case. While AI is still in its early to middle stages of adoption, the pace of take-up is rapidly accelerating and the technology is fast becoming the norm. According to Gartner, by 2025, 30% of government and large enterprise contracts for the purchase of AI products and services will require the use of explainable and ethical AI.

Indeed, some of today’s AI tools can be highly complex, if not outright opaque. The workings of complex statistical pattern-recognition algorithms, for example, can be difficult to interpret, and so-called ‘black box’ models can be too complicated for even expert users to understand fully.
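
The article doesn't tie explainability to any particular tool, but as a rough illustration of what an 'explanation' can look like in practice, the sketch below fits an inherently interpretable model (logistic regression) to entirely synthetic, made-up credit-style data, so each feature's contribution to a decision can be read off directly. The feature names, data, and model are assumptions for illustration only, not any vendor's actual system.

```python
# A minimal sketch of one explainability technique: reading per-feature
# contributions off an interpretable model. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features for a credit-risk style decision (illustrative only).
feature_names = ["income", "existing_debt", "missed_payments", "account_age_years"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Synthetic label: risk driven mainly by debt and missed payments.
y = (0.8 * X[:, 1] + 1.2 * X[:, 2] - 0.5 * X[:, 0]
     + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y.astype(int))

# For a single applicant, coefficient * feature value gives a simple,
# auditable account of what pushed the decision score up or down.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda t: -abs(t[1])):
    print(f"{name:>20}: {value:+.3f}")
```

With a genuinely black-box model, the same kind of per-feature attribution has to be approximated after the fact, which is exactly where the transparency concerns described above arise.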

But if the output of AI cannot be explained and justified, then there is a transparency issue. This casts doubt over whether the modelling used is reliable, based on up-to-date data, free of bias, accurate, and therefore fit for purpose. That does not work in a tightly regulated industry, where firms need to show the regulator the process or path they have followed to achieve compliance, and where the general ethos is to act proportionately in the context of risk, which doesn't always mean doing exactly what the client wants or expects, and to do so transparently.

The cost of performing manual tasks and processing large volumes of data is high, and humans making decisions based on inference rather than fact is hardly suitable in today's world. The adage that the same set of circumstances would receive a different decision on a Tuesday morning than on a Friday afternoon is not entirely untrue. But the potential cost of getting a suitability or fraud decision wrong because an algorithm is hard to read is much, much higher.

Regulators are taking notice of this, looking not just at the possibilities afforded by AI but also at whether its effects are positive. Coming to the fore is the thinking that automation and AI are only valuable if they are transparent and explainable.

Regulators worldwide are increasingly vocal about the importance of understanding why the algorithm reached a particular conclusion.  The UK’s Financial Conduct Authority (FCA) defines AI transparency as ‘stakeholders having access to relevant information about a given AI system.’ 

Regulators need to have faith in financial institutions, understand the approach and the mechanics, and be confident that the process is ethical, fair, and ultimately provable.

The same issue applies to clients. Without trust and faith in financial institutions, clients will happily take their business elsewhere. And if confidence in AI is lacking on a systemic basis, with the technology rejected for lack of transparency, then further industry take-up will be poor. That is bad news for something that gets better and better the more data it has and the more it evolves and progresses.

Ultimately, if AI in all its guises is to be acceptable to regulators and clients, and therefore viable to use, it needs to be transparent and explainable.
