When Rules Are Cool – Why Rules-Based Tech is Better for Matching Use Cases

Today, everyone is talking about AI – it seems that no matter what the use case is, AI is supposed to be the answer. So, many people are surprised that a cutting-edge RiskTech company like Facctum uses rules-based technology, rather than AI, at its core.

Facctum uses rules-based technology because it is the most effective technology for the use case our solution is seeking to address – matching to identify financial criminals and sanctioned individuals and entities. We are doing new and exciting things with this approach and have overcome the disadvantages of established “old-fashioned” rules-based solutions.

A different data architecture

Rules-based approaches have earned a poor reputation because many rules-based solutions that run on older technology are in danger of collapsing under their own weight. These solutions have been around for many years, and layers of rules have built up as regulatory change has resulted in continuous new compliance demands and as new risks have emerged. However, because of the way these older solutions are constructed, when new rules are added on top of old ones, the overall stack usually becomes unstable. Like a Jenga tower, it rapidly becomes challenging to modify a rule within the stack without a serious risk of the whole thing coming crashing down.

Facctum has solved this challenge by creating a data architecture for its financial crime software solution that lets firms keep many complex rules in operation simultaneously while making changes to those rules without the fear of unintended consequences. This gives firms unprecedented flexibility and agility to address new regulatory obligations and emerging risks quickly and easily.

The importance of explainability

Facctum chose to build its financial crime solution using rules-based technology because we believe it is the best technology for the job. A data architecture that enables the implementation of many powerful rules in a connected way – without creating code complexity – leads to better outcomes. In fact, our rules-based approach delivers risk detection outcomes faster than an AI approach.

Good financial crime risk management derives from decisions that are accessible and explainable. In this context, AI-based technologies struggle to provide an adequate explanation of why a customer or transaction might pose a compliance risk. Poorly understood outcomes can cause compliance uncertainty, missed payment cut-offs and client friction. In contrast, using a rules-based approach with fully deterministic decisioning in an innovative data architecture provides clear risk decisions that are fully and quickly explainable. The key here is that the deterministic approach enables Facctum clients to trace each decision back to a rule or set of rules. Firms have detailed transparency into the logic behind every decision that is taken. This is of critical importance for compliance teams today, who need to be able to justify their decisions to regulators and customers.
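To make the idea concrete, here is a minimal sketch in Python of what deterministic, traceable decisioning can look like. It is an illustration only, not Facctum's implementation: the rule identifiers, fields and logic are invented, but the principle is the same – every decision carries the list of rules that produced it, so it can be traced back and explained.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    rule_id: str                       # identifier used in the audit trail (hypothetical)
    description: str
    predicate: Callable[[dict], bool]  # deterministic check against a screening record

@dataclass
class Decision:
    is_hit: bool
    fired_rules: List[str]             # every decision names the rules that produced it

def screen(record: dict, rules: List[Rule]) -> Decision:
    """Deterministic screening: the same record and rules always give the same result."""
    fired = [r.rule_id for r in rules if r.predicate(record)]
    return Decision(is_hit=bool(fired), fired_rules=fired)

# Illustrative rules only -- the names, fields and logic are invented for this sketch.
rules = [
    Rule("R-001", "Exact match against sanctions list",
         lambda rec: rec["name"].lower() in rec["sanctions_list"]),
    Rule("R-002", "High-risk jurisdiction",
         lambda rec: rec["country"] in {"XX", "YY"}),
]

decision = screen(
    {"name": "Jane Doe", "country": "XX", "sanctions_list": {"jane doe"}},
    rules,
)
print(decision)  # Decision(is_hit=True, fired_rules=['R-001', 'R-002'])
```

Because the logic is deterministic, replaying the same record against the same rule set always reproduces the same decision and the same audit trail – which is what makes the outcome straightforward to evidence.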

In contrast, with AI, it can sometimes be quite difficult to unwind the reasoning behind the decisions the machine has taken. Explaining AI decision-making to a customer or a regulator can prove even more difficult, as it can involve discussing the nature of the algorithms used and how the AI has applied those algorithms in an individual case.

Translating into Technology

Today, firms can employ Facctum rules-based data architecture to screen customers or transactions for a full spectrum of risks on an agile, accurate and highly scalable platform. Firms can align complex list management policies to operational workflow, and build, model and deploy matching and scoring rules for their specific risk profile.
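As a rough illustration of what a configurable matching-and-scoring rule might look like, the sketch below combines a simple name-similarity score with a jurisdiction weighting and a firm-defined threshold. The threshold, weight and similarity measure are assumptions made for this example, not Facctum's actual matching logic, which in practice would be far richer.

```python
from difflib import SequenceMatcher

# Thresholds and weights below are invented for illustration; in practice they
# would be tuned to the firm's own risk profile and watchlists.
MATCH_CONFIG = {
    "name_similarity_threshold": 0.85,  # minimum score to treat a candidate as a match
    "country_weight": 0.2,              # boost applied when the jurisdiction also matches
}

def name_similarity(a: str, b: str) -> float:
    """Simple string similarity in [0, 1]; real matching engines use richer logic."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def score_candidate(customer: dict, listed_entity: dict, config: dict) -> dict:
    """Score one customer against one listed entity and apply the firm's threshold."""
    score = name_similarity(customer["name"], listed_entity["name"])
    if customer.get("country") == listed_entity.get("country"):
        score = min(1.0, score + config["country_weight"])
    return {
        "listed_entity": listed_entity["name"],
        "score": round(score, 3),
        "is_match": score >= config["name_similarity_threshold"],
    }

print(score_candidate(
    {"name": "Jon Smyth", "country": "GB"},
    {"name": "John Smith", "country": "GB"},
    MATCH_CONFIG,
))
```

The point of the sketch is that the risk appetite lives in configuration: tightening or loosening the threshold changes screening behaviour without rewriting the matching logic itself.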

True innovation is about using technology to deliver better outcomes for a specific use case. With Facctum, the innovation is in how the rules-based data architecture is constructed.

So, although “rules-based” does not sound as sexy as AI, the reality is that it meets the demands of the use case much better and delivers impressive results that stand up to scrutiny.

Making AI Explainability better – what could/should things look like?

There’s no doubt that AI’s algorithms can be highly complex and sophisticated and, for that reason, valuable. Many have been put to work in a black-box environment, where their workings are not visible, and their value is kept within the firm and not made available elsewhere. While this might make sense in commercial and copyright terms, it goes against the grain in a highly regulated financial services industry where procedural and technical transparency is central to the compliance framework.

Moreover, there’s almost been a willingness to dismiss AI as unexplainable or opaque and happily commit it to a black box.  

But not only does that clash with the precepts of governance frameworks – which require decisions to be supported by evidence – it is also short-sighted. Only by learning from how the AI is working and being able to understand and demonstrate the techniques used can the industry develop and learn. This is particularly important in a firm-wide risk framework where a company needs to know how reliable the model is when it comes to being robust, traceable, and defendable – and thus worthy of trust and further development.

For this to happen, the black box needs to become a white-box model where the outcome is explainable by design, without any additional capabilities. This is achieved by making testing an integral part of the system and of the programming, which is designed to incorporate test cases.

In this way, algorithms can be configured with a set of controls to ensure automated decisions are aligned to risk profiles and regulatory expectations. By choosing a technology that is designed as a white box, something useful and explainable is easier to achieve than by going it alone. Indeed, a white-box approach enables clients to make better use of AI technology because they understand it better; with a white box, the purchaser has the blueprints. This in turn leads to a more sustainable use of software, because clients can build new processes without having to resort to a vendor’s professional services team to unlock the inner workings of the technology.
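As a hedged sketch of what “explainable by design” with built-in testing could look like, the example below attaches test cases directly to each rule and refuses to accept any rule whose expected outcomes cannot be reproduced. The rule, threshold and test cases are invented for illustration.

```python
# Rule names, thresholds and test cases below are invented for illustration.
RULES = {
    "high_value_transfer": {
        "logic": lambda txn: txn["amount"] >= 10_000,
        "test_cases": [
            ({"amount": 15_000}, True),   # must flag
            ({"amount": 500}, False),     # must not flag
        ],
    },
}

def validate_rules(rules: dict) -> None:
    """Run every rule's embedded test cases before the rule is allowed into production."""
    for name, rule in rules.items():
        for record, expected in rule["test_cases"]:
            actual = rule["logic"](record)
            if actual != expected:
                raise AssertionError(
                    f"Rule '{name}' failed its own test case {record}: "
                    f"expected {expected}, got {actual}"
                )
    print(f"All {len(rules)} rule(s) passed their embedded test cases.")

validate_rules(RULES)
```

Because the expected behaviour ships with the rule itself, anyone reviewing the system – a compliance analyst, an auditor or a regulator – can see not only what the rule does but also the evidence that it does it.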

Companies can also choose to work closely with a vendor where there is a partnership approach and the vendor is able to advise and coach the firm to get the best out of the technology. This is a very desirable trait for vendors to have, whether the financial services firm is experienced or not – two heads are nearly always better than one!


What happens when there is no explainability? 

If the regulator or client cannot see how decisioning works, they have no reason to trust it, nor any way to assess those decisions against governance goals. The need is not only to be confident in the capabilities the AI confers but also in the methods it uses.

This is especially important when it comes to trust, fairness, and the confidence that bias, conscious or unconscious, and false positives have been tackled appropriately. Corporate clients and consumers must have faith that they are not being discriminated against and that decisions have been based on a fair and transparent methodology.

This feeds into risk management for the financial institution and into making accurate decisions surrounding credit risk, fraud, suitability, and the like. The AI needs to show how it identified the needle in the haystack and what made it different from all the other needles. It should then be able to learn from that to correctly identify other such needles that might come up in the future. Pertinently, it also needs to recognise when it does not know, and ask for help so that it can keep learning from the past going forward.

Those using the output of AI also need to know how it works. If analysts do not understand how a decision was reached, they are less able to make informed decisions. Institutions, having invested heavily in AI, want to see a return. That probably means using AI-powered automation to free up humans to do value-added tasks. In reducing false positives, for example, the output for humans to investigate needs to be finely tuned and accurate – thus removing a workload that would previously have been handled at least partially manually.

It also means that data scientists can be freed from interpreting outputs to instead refine and hone algorithms and processes, and even work to extend the application of AI into other business streams. This shift of focus typically results in more interesting science – and therefore helps with the retention of highly skilled data scientists.

As trust is important to consumers, so it is too for organisations. Having clarity over why a decision has been made and being able to spot when a decision is correct but the reasoning is flawed is one element. Taking steps to resolve the issue promotes transparency and trust and a sense of control too. It means the ‘bad decisioning behaviour’ can be corrected, and the system learns not to do it again. 

All of this encourages AI adoption and acceptance; trust, built on understanding and transparency, is key to uptake.

What is explainability, and why is it important?

Explainability, on a basic level, is showing how an algorithm reached the decision or conclusion that it did. It is not dissimilar to trying to work out the ingredients of a dish without knowing the recipe.  You probably know what’s in it but you can’t be certain.  

If AI is being used to make decisions over, say, financial crime or credit risk, or suitability, then transparency is key to avoid bias, unconscious or conscious, and to satisfy the client and regulator that the end output is appropriate and just. 

Until recently, this transparency issue had been largely passed over. Risk technology vendors were unwilling to reveal their proprietary algorithms, and interactions between different layers of decisioning were complex and challenging to pull out. And in any case, the uses and reach of AI were limited. 

This is no longer the case. While AI is still in its early to middle stages of adoption, the pace of take-up is rapidly accelerating and it is fast becoming the norm. According to Gartner, by 2025, 30% of government and large enterprise contracts for the purchase of AI products and services will require the use of explainable and ethical AI.

Indeed, some of today’s AI tools can be highly complex, if not outright opaque. The workings of complex statistical pattern recognition algorithms, for example, can become too difficult to interpret, and so-called ‘black box’ models can be too complicated for even expert users to understand fully.

But if the output of AI cannot be explained and justified, then there is a transparency issue. This casts doubt over whether the modelling used is reliable, based on up-to-date data, free of bias, accurate, and therefore fit for purpose. That does not work in an industry that is tightly regulated, where firms need to show the regulator the process or path they have followed to achieve compliance, and where the general ethos is to always act proportionately in the context of risk – which does not always mean doing exactly what the client wants or expects – and to do so transparently.

The cost of performing manual tasks and processing large volumes of data is high, and humans making decisions based on inference rather than fact is hardly suitable in today’s world. The adage that the same set of circumstances would receive a different decision on a Tuesday morning than on a Friday afternoon is not entirely untrue. But the potential cost of getting a suitability or fraud decision wrong because an algorithm is hard to read is much, much higher.

Regulators are taking notice of this, looking not just at the possibilities afforded by AI but also at whether its effects are positive. Coming to the fore is the thinking that automation and AI are only valuable if they are transparent and explainable.

Regulators worldwide are increasingly vocal about the importance of understanding why the algorithm reached a particular conclusion.  The UK’s Financial Conduct Authority (FCA) defines AI transparency as ‘stakeholders having access to relevant information about a given AI system.’ 

Regulators need to have faith in financial institutions, understand both the approach and the mechanics, and be confident that the process is ethical, fair, and ultimately provable.

The same issue applies to clients. Without trust and faith in financial institutions, clients will happily take their business elsewhere. And if there is no systemic confidence in AI as a principle, and it is rejected for lack of transparency, then further industry take-up will be poor. This is bad news for something that gets better and better the more data it has and the more it evolves and progresses.

Ultimately then, if AI in all its guises is to be acceptable to regulators and clients and therefore be viable to use, then it needs to be transparent and explainable.