When Rules Are Cool – Why Rules-Based Tech is Better for Matching Use Cases

Today, everyone is talking about AI – it seems that no matter what the use case is, AI is supposed to be the answer. So, many people are surprised that a cutting-edge RiskTech company like Facctum uses a rules-based technology instead of AI within its core.

Facctum uses rules-based technology because it is the most effective technology for the use case our solution is seeking to address – matching to identify financial criminals and sanctioned individuals and entities. We are doing new and exciting things with this approach and have overcome the disadvantages of established “old-fashioned” rules-based solutions.

A different data architecture

Rules-based approaches have earned a poor reputation because many rules-based solutions that run on older technology are in danger of collapsing under their own weight. These solutions have been around for many years, and layers of rules have built up as regulatory change has resulted in continuous new compliance demands, and as new risks have emerged. However, because of the way these older solutions are constructed, when new rules are added on top of old ones, the overall stack usually becomes unstable. Like a Jenga puzzle, it rapidly becomes challenging to modify a rule within the stack without a serious risk of the whole thing coming crashing down.

Facctum has solved this challenge by creating a data architecture for its financial crime software solution that lets firms have many complex rules in operation simultaneously while enabling firms to make changes to those rules without the fear of unintended consequences. This gives firms unprecedented flexibility and agility to address new regulatory obligations and emerging risks quickly and easily.

The importance of explainability

Facctum chose to build its financial crime solution using rules-based technology because we believe it is the best technology for the job. A data architecture that enables the implementation of many powerful rules in a connected way – without creating code complexity – leads to better outcomes. In fact, our rules-based approach delivers risk detection outcomes faster than an AI approach.

Good financial crime risk management derives from decisions that are accessible and explainable. In this context, AI-based technologies struggle to provide an adequate explanation of why a customer or transaction might pose a compliance risk. Poorly understood outcomes can cause compliance uncertainty, missed payment cut-offs and client friction. In contrast, using a rules-based approach with fully deterministic decisioning in an innovative data architecture provides clear risk decisions that are fully and quickly explainable. The key here is that the deterministic technology approach enables Facctum clients to trace back each decision to a rule or several rules. Firms have detailed transparency into the logic behind every decision that is taken. This is of critical importance for compliance teams today, who need to be able to justify their decisions to regulators and customers.
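To make the idea of deterministic, traceable decisioning concrete, here is a minimal sketch in Python. The rule names, fields and matching logic are hypothetical illustrations, not Facctum's actual implementation; the point is simply that every hit records exactly which rules fired.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    predicate: Callable[[dict], bool]

def screen(record: dict, rules: list[Rule]) -> dict:
    """Apply every rule deterministically and record exactly which rules fired."""
    fired = [r.name for r in rules if r.predicate(record)]
    return {"hit": bool(fired), "fired_rules": fired}

# Hypothetical rules for illustration only
rules = [
    Rule("exact_name_match", lambda r: r["name"].lower() == r["list_name"].lower()),
    Rule("dob_match", lambda r: r.get("dob") == r.get("list_dob")),
]

decision = screen(
    {"name": "Jane Doe", "list_name": "JANE DOE",
     "dob": "1970-01-01", "list_dob": "1970-01-01"},
    rules,
)
# Every hit can be traced back to the named rules that produced it
print(decision)
```

Because the output carries the list of fired rules, an analyst can answer "why was this flagged?" by pointing at named rules rather than unwinding an opaque model.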

In contrast, with AI, it can sometimes be quite difficult to unwind the thinking behind the decisions that the machine has taken. To explain AI decision-making to a customer or a regulator can prove to be even more difficult, as it can involve discussing the nature of the algorithms used and how the AI has applied those algorithms in an individual case.

Translating into Technology

Today, firms can employ Facctum rules-based data architecture to screen customers or transactions for a full spectrum of risks on an agile, accurate and highly scalable platform. Firms can align complex list management policies to operational workflow, and build, model and deploy matching and scoring rules for their specific risk profile.

True innovation is about using technology to deliver better outcomes for a specific use case. With Facctum, the innovation is in how the rules-based data architecture is constructed.

So, although “rules-based” does not sound as sexy as AI, the reality is that it meets the demands of the use case much better and delivers impressive results that stand up to scrutiny.

Is “compliance as a competitive differentiator” helpful or harmful?

“Compliance is becoming a competitive differentiator.” This phrase is growing increasingly fashionable within risktech circles today as a way of trying to articulate the value that having robust compliance technology in place can bring. It is a well-intentioned statement, but it is incorrect. Worse, its continued use could be damaging to the compliance discipline as a whole.

The most important goal of a compliance solution should be helping to ensure that the firm concerned is meeting its regulatory obligations in a particular area. For financial crime solutions, this means efficiently detecting potential and existing customers that the firm should not be doing business with. Technology cannot do the job alone, however. Globally, the compliance discipline has a long history of collaboration and information sharing to support the international mission of stamping out financial crime. None of this is, or should be, about competitive advantage.

The role of customer experience

However, providing the business with a great customer experience can be about competitive advantage. Reducing friction in onboarding, KYC and payments is critical to improving CX, which can enhance the firm’s reputation. This can both help to attract new customers and retain existing customers, making the customer experience a very powerful differentiator for financial services firms. So, the secondary goal of good financial crime software should be delivering the elements that support a great CX.

Customer experience is the competitive differentiator here, not compliance.

Why is it important to make this distinction? There is a real risk that if compliance is seen as a “competitive differentiator” among firms, the much-needed collaboration and information sharing among firms will simply dry up. This has happened in other areas in the past – the early days of the operational risk discipline saw unprecedented collaboration among firms, which ended, in part, when op risk teams began to pitch themselves as delivering competitive advantage. The discipline’s progress moved into the slow lane as collaboration collapsed.

Let us not risk the degradation of financial crime prevention collaboration in the pursuit of market share, placing the desire for growth ahead of the regulatory and ethical obligation to combat financial crime.

Best of both worlds

Firms need to work with a financial crime software solution that delivers on regulatory obligations and delivers an improved customer experience. A best-in-class core technology stack that provides low-latency, high-speed screening, and which is capable of massive scale, can process anti-money laundering checks and sanctions screening quickly and efficiently, enabling firms to onboard clients more rapidly than ever before. And because the solution is based on a much more accurate matching engine, false positives are reduced, and compliance requirements are met.

The right solution also enables the modelling and testing of the impact of new screening requirements, to improve speed-to-compliance. This greatly reduces the risk of delays to customer onboarding when new requirements are announced, while at the same time ensuring that new screening requirements are implemented in such a way that they meet compliance demands.

The financial crime discipline needs to be careful about how it articulates the value that it delivers to organisations. Detecting and preventing financial crime – through meeting compliance obligations – should never be about competitive advantage. Instead, it should be about collaboration. However, the right software can support the compliance demands and ethical needs in the fight against financial crime, while at the same time providing a superior customer experience through robust technology delivered in the cloud.

What happens when there is no explainability? 

If the regulator or client cannot see how decisioning works, they have no reason to trust it, nor any way to assess those decisions against governance goals. The need is to be confident not only in the capabilities the AI confers but also in the methods it uses.

This is especially important when it comes to trust, fairness, and the confidence that bias, conscious or unconscious, and false positives have been tackled appropriately. Corporate clients and consumers must have faith that they are not being discriminated against and that decisions have been based on a fair and transparent methodology.

This feeds into risk management for the financial institution and into making accurate decisions surrounding credit risk, fraud, suitability, and the like. The AI needs to show how it identified the needle in the haystack and what made it different from all the other needles. It should then be able to learn from that to correctly identify other such needles that might come up in the future. Pertinently, it also needs to know when it does not know, and to ask for help so that it can learn for the future.

Those using the output of AI also need to know how it works. If analysts do not understand how a decision was reached, they are less able to make informed decisions. Institutions, having invested heavily in AI, want to see a return. That probably means using AI-powered automation to free up humans to do value-added tasks. In reducing false positives, for example, the output for humans to investigate needs to be finely tuned and accurate – thus removing a workload that would previously have been handled at least partially manually.

It also means that data scientists can be freed from interpreting outputs to instead refine and hone algorithms and processes, and even work to extend the application of AI into other business streams. This shift of focus typically results in more interesting science – and therefore helps with the retention of highly skilled data scientists.

As trust is important to consumers, so it is too for organisations. Having clarity over why a decision has been made and being able to spot when a decision is correct but the reasoning is flawed is one element. Taking steps to resolve the issue promotes transparency and trust and a sense of control too. It means the ‘bad decisioning behaviour’ can be corrected, and the system learns not to do it again. 

All of this encourages AI adoption and acceptance; trust through understanding through transparency is key for uptake. 

Achieving effective speed-to-compliance

A critical objective of a financial crime risk management strategy is to ensure a rapid and agile response to new or evolving AML-CTF risks. Screening programmes that respond quickly and effectively to new compliance expectations not only reduce institutional risk exposure: they can also ensure that sanctions targets have less time to move or hide assets. 

However, achieving speed-to-compliance that is operationally effective and sustainable is increasingly difficult. These challenges have intensified in the contemporary context of complex, expansive and high-velocity international sanctions following the invasion of Ukraine by Russia.

In addition to the continuous assessment of threats and regulatory requirements, a strategic FCRM plan must also consider the capacity and utility of operations and technology resources. Components of an AML-CTF control framework that are critical in delivering speed-to-compliance include: 

List Management

Ensuring that all the sanctions lists required by an institution are available in screening programmes has become an increasingly complex task. Unique list requirements are set by many different jurisdictions and by multiple government agencies within those jurisdictions; and these agencies can publish multiple lists. Operational complexity is increased by multiple types of data formats and targets. At the same time, a higher update velocity must also be managed to ensure that all required list data is available for continuous screening. Managing these tasks requires list management software that can be configured quickly in response to increases or changes in any of these variables, and that provides adequate capacity to address future demands.
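The consolidation problem described above can be sketched in a few lines of Python. The feed formats, field names and agency labels below are invented for illustration; real sanctions feeds are far richer, but the principle of normalising disparate formats into one common schema is the same.

```python
import csv
import io
import json

# Hypothetical raw feeds, in different formats, from different agencies
CSV_FEED = "id,name\nOFAC-1,ACME TRADING LLC\n"
JSON_FEED = '[{"ref": "EU-77", "full_name": "John Smith"}]'

def normalise_csv(text: str, source: str) -> list[dict]:
    """Map a CSV feed's columns onto the common screening schema."""
    return [{"source": source, "list_id": row["id"], "name": row["name"]}
            for row in csv.DictReader(io.StringIO(text))]

def normalise_json(text: str, source: str) -> list[dict]:
    """Map a JSON feed's fields onto the same common schema."""
    return [{"source": source, "list_id": e["ref"], "name": e["full_name"]}
            for e in json.loads(text)]

# One consolidated view that the screening engine can refresh continuously
consolidated = normalise_csv(CSV_FEED, "OFAC") + normalise_json(JSON_FEED, "EU")
```

Each new jurisdiction or format then only requires a new adapter into the common schema, rather than changes to the screening engine itself.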

Data quality assurance

Whilst there are laudable initiatives to standardise the formats of sanctions lists, it remains the case that an institution is likely to source sanctions data in many standards or formats. Once retrieved, these disparate data sets are typically primed for screening using various data management processes. Ensuring the speed, accuracy, consistency, and resilience of these processes is critical. As a result, additional investments are being made in data management and governance capabilities to improve data configurability, real-time reporting, and operational analytics – with the objective of ensuring the timely response of screening technology to compliance policy objectives. 
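A simple illustration of the kind of priming step described above: the function below cleans a list entry into a consistent form and reports any quality issues it finds. The field names and checks are hypothetical examples, not a description of any particular product's data management process.

```python
def quality_check(entry: dict, required=("list_id", "name")) -> tuple[dict, list[str]]:
    """Clean a list entry and report any data quality issues found."""
    issues = [f"missing {field}" for field in required if not entry.get(field)]
    cleaned = dict(entry)
    if cleaned.get("name"):
        # Normalise whitespace and casing so matching behaves consistently
        cleaned["name"] = " ".join(cleaned["name"].split()).upper()
    return cleaned, issues

cleaned, issues = quality_check({"list_id": "EU-77", "name": "  john   smith "})
```

In practice such checks would feed the real-time reporting and operational analytics mentioned above, so that a malformed feed is caught before it degrades screening accuracy.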

Screening speed and scalability

The expectation that screening technology will respond promptly to new compliance requirements is accompanied by the assumption that compliance goals will be met without compromise to search accuracy or performance. However, adjusting to increasing operational load and heightened expectations, whilst maintaining, if not improving, speed, is increasingly difficult if incumbent screening technology is constrained by technology debt. This requires institutions to reassess whether screening operations can continue to deliver sustained operational excellence – and for how long – or if newer technology can provide improved speed, risk detection and scalability to increasing data volumes.

Tuning and testing

Increases in the range of direct and indirect sanctions targets have the potential to slow down compliance processes, especially if screening systems are not supported with additional resources. The traditional short-term response of adding headcount to operational capacity might still be valid, but investment in AI-led automation is increasingly preferred. However, either approach can be made more effective and efficient by more focus on the upstream modelling and testing of the impacts of new screening requirements. To avoid a compliance gap, this testing should take place in real-time, without incurring an operational penalty.
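The upstream impact modelling described above can be sketched as a comparison of a baseline and a candidate rule set over a historical sample. The thresholds, fields and country code below are purely illustrative assumptions.

```python
def impact_of_new_rule(history: list[dict], baseline, candidate) -> dict:
    """Compare alert volumes of current vs proposed screening logic on past data."""
    base_alerts = sum(1 for rec in history if baseline(rec))
    cand_alerts = sum(1 for rec in history if candidate(rec))
    return {
        "baseline_alerts": base_alerts,
        "candidate_alerts": cand_alerts,
        "additional_review_load": cand_alerts - base_alerts,
    }

# Hypothetical historical transactions
history = [{"amount": a, "country": c} for a, c in
           [(500, "GB"), (20_000, "GB"), (9_000, "XX"), (50, "XX")]]

baseline = lambda r: r["amount"] > 10_000
# Candidate adds screening of a newly sanctioned jurisdiction "XX"
candidate = lambda r: r["amount"] > 10_000 or r["country"] == "XX"

report = impact_of_new_rule(history, baseline, candidate)
```

Running this kind of comparison before deployment tells the operations team how much extra review load a new requirement will generate, so capacity can be planned rather than discovered.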

In summary, the pace and intensity of today’s regulatory compliance requirements are placing additional demands on screening operations. Technology innovation that delivers low-latency processing and capabilities for handling massive data can mitigate these challenges whilst increasing speed-to-compliance and providing additional capacity for future needs.

The next post in the series will consider the challenges of improving the compliance effectiveness of screening and how technology can lead the response.