What happens when there is no explainability? 

Explainability

KK Gupta

5 Jul 2022

If a regulator or client cannot see how decisioning works, they have no reason to trust it, nor any way to assess those decisions against governance goals. The need is not only to be confident in the capabilities the AI confers, but also in the methods it uses.

This is especially important when it comes to trust, fairness, and the confidence that bias, whether conscious or unconscious, and false positives have been tackled appropriately. Corporate clients and consumers must have faith that they are not being discriminated against and that decisions have been based on a fair and transparent methodology.

This feeds into risk management for the financial institution and into making accurate decisions around credit risk, fraud, suitability, and the like. The AI needs to show how it identified the needle in the haystack and what made it different from all the other needles. It should then be able to learn from that to correctly identify similar needles in the future. Just as importantly, it needs to know when it does not know, and ask for help so that it can keep learning.
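As an illustration of that last point, here is a minimal sketch, not drawn from any particular product, of a classifier that only acts on its own when its confidence clears a threshold and otherwise routes the case to a human analyst. The model choice, the synthetic data, and the 0.9 threshold are all assumptions made for the example.

```python
# Minimal sketch of "knowing when it does not know": auto-decide only when
# the model's confidence clears a threshold, otherwise escalate to a human.
# Model, synthetic data, and the 0.9 threshold are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data: the rare positive class stands in for the "needles".
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.97, 0.03], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

CONFIDENCE_THRESHOLD = 0.9  # assumed policy: below this, ask for help

for scores in model.predict_proba(X_test)[:10]:
    confidence = scores.max()
    if confidence >= CONFIDENCE_THRESHOLD:
        outcome = f"auto-decision: class {scores.argmax()}"
    else:
        outcome = "escalate to human review (model is unsure)"
    print(f"confidence={confidence:.2f} -> {outcome}")
```

Cases the model escalates become labelled examples once an analyst resolves them, which is how the system learns from the past going forward.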

Those using the output of AI also need to know how it works. If analysts do not understand how a decision was reached, they are less able to make informed decisions. Institutions, having invested heavily in AI, want to see a return. That typically means using AI-powered automation to free up humans for value-added tasks. In reducing false positives, for example, the output passed to humans for investigation needs to be finely tuned and accurate, removing a workload that would previously have been handled at least partially manually.

It also means that data scientists can be freed from interpreting outputs to instead refine and hone algorithms and processes, and even work to extend the application of AI into other business streams. This shift of focus typically results in more interesting science, which in turn helps with the retention of highly skilled data scientists.

As trust is important to consumers, so it is for organisations. Having clarity over why a decision has been made, and being able to spot when a decision is correct but the reasoning is flawed, is one element. Taking steps to resolve the issue promotes transparency, trust, and a sense of control. It means the 'bad decisioning behaviour' can be corrected and the system learns not to repeat it.
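One common way to spot a correct decision made for the wrong reasons, offered here only as a hedged sketch rather than the article's own method, is to expose per-decision feature contributions. For a linear model these can be read directly as coefficient times feature value, so an analyst can see whether the outcome rests on a sensible signal or a spurious one. The data and feature names below are placeholders.

```python
# Sketch (not the article's method) of inspecting the reasoning behind one
# decision: for a logistic regression, each feature's contribution to the
# score is coefficient * standardised feature value.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=5, random_state=1)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]  # placeholder names

X_std = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X_std, y)

case = X_std[0]                        # a single decision to explain
contributions = model.coef_[0] * case  # per-feature contribution to the score

# Rank features by how strongly they drove this particular decision.
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda item: -abs(item[1])):
    print(f"{name}: {value:+.3f}")
print("predicted class:", model.predict(case.reshape(1, -1))[0])
```

If the top contributor turns out to be a proxy for something the institution should not be relying on, the flawed reasoning is visible and can be corrected before it causes harm.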

All of this encourages AI adoption and acceptance; trust, built on understanding and transparency, is key to uptake.
