Making AI explainability better – what could/should things look like?

There’s no doubt that AI algorithms can be highly complex and sophisticated and, for that reason, valuable. Many have been put to work in a black-box environment, where their workings are not visible and their value is kept within the firm rather than made available elsewhere. While this might make sense in commercial and copyright terms, it goes against the grain in a highly regulated financial services industry where procedural and technical transparency is central to the compliance framework.

Moreover, there has almost been a willingness to dismiss AI as unexplainable or opaque and happily commit it to a black box.

But not only does that clash with the precepts of governance frameworks – which require decisions to be supported by evidence – it is also short-sighted. Only by learning from how the AI is working, and being able to understand and demonstrate the techniques used, can the industry develop and learn. This is particularly important in a firm-wide risk framework, where a company needs to know how robust, traceable, and defendable the model is – and thus whether it is worthy of trust and further development.

For this to happen, the black box needs to become a white-box model, where the outcome is explainable by design and without any additional capabilities. Testing is an integral part of such a system: the programming is designed to incorporate test cases from the outset.

In this way, algorithms can be configured with a set of controls to ensure automated decisions are aligned to risk profiles and regulatory expectations. By choosing a technology that is designed as a white box, something useful and explainable is easier to achieve than going it alone. Indeed, a white box enables clients to make better use of AI technology because they understand it better; the purchaser has the blueprints. This in turn leads to a more sustainable use of software, because clients can build new processes without having to resort to a vendor’s professional services team to unlock the inner workings of the tech.
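To make the idea of "a set of controls" concrete, here is a minimal sketch in Python. It assumes a hypothetical decision wrapper in which every automated decision must pass explicit, human-readable control checks before it is released; the control names and thresholds are illustrative, not a real product API.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    score: float   # model score in [0, 1]; higher means riskier
    reasons: list  # human-readable reasons attached by the model

def within_risk_appetite(decision: Decision, max_auto_approve_score: float = 0.8) -> bool:
    # Control 1: only auto-approve when the risk score sits
    # inside the configured risk appetite.
    return not decision.approved or decision.score <= max_auto_approve_score

def has_explanation(decision: Decision) -> bool:
    # Control 2: a decision with no attached reasons is not
    # explainable by design, so it must not be released automatically.
    return len(decision.reasons) > 0

def release(decision: Decision) -> str:
    # Every automated decision passes through the full set of controls.
    checks = [within_risk_appetite(decision), has_explanation(decision)]
    return "auto-release" if all(checks) else "refer-to-human"
```

The point of the sketch is that the controls are plain, inspectable rules sitting alongside the model, so a compliance reviewer can read exactly why a decision was released or referred.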

Companies can also choose to work closely with a vendor in a partnership approach, where the vendor advises and coaches the firm to get the best out of the technology. This is a very desirable trait for vendors to have, whether the financial services firm is experienced or not – two heads are nearly always better than one!


What happens when there is no explainability? 

If the regulator or client cannot see how decisioning works, they have no reason to trust it, nor any way to assess its decisions against governance goals. They need to be confident not only in the capabilities the AI confers but also in the methods it uses.

This is especially important when it comes to trust, fairness, and the confidence that bias, conscious or unconscious, and false positives have been tackled appropriately. Corporate clients and consumers must have faith that they are not being discriminated against and that decisions have been based on a fair and transparent methodology.

This feeds into risk management for the financial institution and into accurate decision-making around credit risk, fraud, suitability, and the like. The AI needs to show how it identified the needle in the haystack and what made it different from all the other needles. It should then be able to learn from that to correctly identify similar needles in the future. Pertinently, it also needs to know when it does not know, and to ask for help so that it can keep learning from experience.
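"Knowing when it does not know" can be sketched as a simple abstention pattern: when the model's confidence is low, it defers to a human, and the human's answer is fed back as training data. The toy model, threshold, and queue names below are all hypothetical, purely to illustrate the loop.

```python
REVIEW_QUEUE = []  # cases the model deferred to human analysts
FEEDBACK = []      # (features, human_label) pairs to learn from later

def classify(features, model, threshold=0.9):
    label, confidence = model(features)
    if confidence >= threshold:
        return label
    # Below threshold: admit uncertainty and ask for help.
    REVIEW_QUEUE.append(features)
    return "needs-human-review"

def record_human_label(features, label):
    # The analyst's answer becomes training data, so the system
    # learns from the cases it could not decide on its own.
    FEEDBACK.append((features, label))

# Toy model: flags large transactions as fraud, with made-up confidences.
def toy_model(features):
    amount = features["amount"]
    if amount > 10_000:
        return "fraud", 0.95
    if amount > 8_000:
        return "fraud", 0.6   # borderline case: low confidence
    return "genuine", 0.97
```

A borderline transaction (say, 9,000) would fall below the threshold and land in the review queue rather than being silently misclassified, which is exactly the "ask for help" behaviour described above.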

Those using the output of AI also need to know how it works. If analysts do not understand how a decision was reached, they are less able to make informed decisions. Institutions, having invested heavily in AI, want to see a return. That probably means using AI-powered automation to free up humans for value-added tasks. In reducing false positives, for example, the output for humans to investigate needs to be finely tuned and accurate – removing a workload that would previously have been handled at least partially manually.

It also means that data scientists can be freed up from interpreting outputs to instead refine and hone algorithms and processes, and even work to build out the application of AI into other business streams. This shift of focus typically results in more interesting science – and therefore helps with the retention of highly skilled data scientists.

As trust is important to consumers, so it is for organisations. Having clarity over why a decision has been made – and being able to spot when a decision is correct but the reasoning is flawed – is one element. Taking steps to resolve the issue promotes transparency, trust, and a sense of control too. It means the ‘bad decisioning behaviour’ can be corrected, and the system learns not to do it again.
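Spotting a "right answer, wrong reasoning" case is only possible when the system exposes the factors it used. A minimal sketch, assuming a hypothetical audit step and an illustrative list of prohibited factors:

```python
# Factors a decision must never rely on; illustrative only.
PROHIBITED_FACTORS = {"gender", "ethnicity", "postcode"}

def audit_reasoning(decision_correct: bool, factors_used: set) -> str:
    # A white-box model surfaces factors_used, so a review step can
    # flag flawed reasoning even when the outcome happens to be right.
    flawed = factors_used & PROHIBITED_FACTORS
    if flawed:
        # The decision may be correct, but the reasoning must be
        # corrected so the model does not repeat it.
        return "flag-for-correction: " + ", ".join(sorted(flawed))
    return "sound" if decision_correct else "wrong-outcome"
```

Here a correct approval that leaned on, say, postcode would still be flagged for correction – the ‘bad decisioning behaviour’ is caught before it becomes an ingrained pattern.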

All of this encourages AI adoption and acceptance; trust, built on understanding through transparency, is key to uptake.