Cloud-first! From on-prem!

Facctum inaugurated its Bangalore office on 27th July. We used the opportunity to bring everyone in Facctum to this brand-new office for an entire week, spending it experiencing the very thoughtfully designed facilities. The sense of belonging was palpable almost as soon as we walked in. Until this inauguration, the Facctum team – now almost touching 30 – had never gathered in one physical place, let alone spent a full week together. What an amazing week it was! We had icebreakers, two separate hackathons, a session each on culture and values, ways of working, Product Roadmap, Path to Revenue, and Customer Experience, and team-building activities including a hugely exciting race around Bangalore themed on the Amazing Race, riddle-solving in Escape Rooms, and dinner and drinks every night to talk shop and have fun at the same time.

Everyone was there not just physically but fully present mentally and emotionally. The time we spent together will forever be cherished: it brought all of us closer, the pieces finally started to fit, and we all felt energised. Since the week-long fun-and-tech-fest, velocity has gone up and collaboration has increased significantly. A simple proof point was our All Hands call last week, during which everyone switched on their cameras, most proudly wearing Facctum T-shirts. The smiles, confidence, clarity, and camaraderie were on display like never before. The entire team felt 200% more engaged and connected.

The office itself was built over the previous three months at a frantic pace – architecture, design, and construction all needing to come together, with a lot of moving parts at the same time – very different from a large corporate throwing money and resources at such challenges. We needed to keep costs as low as practical, which meant getting personally involved in designing every small detail to build a modern and efficient place to collaborate. The entire project was done in true start-up style. We set a milestone – half of it based on plans and estimates, the other half just on a whim – and went all out to achieve it, as you do in a start-up. While we could not hit the milestone fully on time, we came very close, overrunning by only two to three weeks – par for the course in any start-up.

We created wonderful moments throughout the week. In 3-5 years, we shall all look back at this week and know it was a defining moment.

Facctum is currently spread across three locations – Pune, Bangalore, and London. These are all long-term strategic locations for us, and we intend to build our own offices in Pune and London in time as we grow our employee and client base. Until then, we will continue to operate from co-working offices in both cities. Here's hoping that I am writing another blog in about a year's time on a new office opening in Pune.

Making AI Explainability better – what could/should things look like?

There’s no doubt that AI’s algorithms can be highly complex and sophisticated and, for that reason, valuable. Many have been put to work in a black-box environment, where their workings are not visible and their value is kept within the firm rather than made available elsewhere. While this might make sense in commercial and copyright terms, it goes against the grain in a highly regulated financial services industry where procedural and technical transparency is central to the compliance framework.

Moreover, there’s almost been a willingness to dismiss AI as unexplainable or opaque and happily commit it to a black box.  

But not only does that clash with the precepts of governance frameworks – which require decisions to be supported by evidence – it is also short-sighted. Only by learning from how the AI is working, and being able to understand and demonstrate the techniques used, can the industry develop and learn. This is particularly important in a firm-wide risk framework, where a company needs to know how robust, traceable, and defendable the model is – and thus whether it is worthy of trust and further development.

For this to happen, the black box needs to become a white-box model, where the outcome is explainable by design and without any additional capabilities. Testing is an integral part of such a system: the programming is designed to incorporate test cases from the outset.

In this way, algorithms can be configured with a set of controls to ensure automated decisions are aligned to risk profiles and regulatory expectations. By choosing a technology that is designed as a white box, something useful and explainable is easier to achieve than going it alone. Indeed, a white box enables clients to make better use of AI technology because they understand it better; the purchaser has the blueprints. This in turn leads to more sustainable use of software, because clients can build new processes without having to resort to a vendor’s professional services team to unlock the inner workings of the tech.
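As an illustration only – the names, rules, and thresholds below are hypothetical, not a description of any actual product – a white-box decision step can be written so that every outcome carries the explicit controls that produced it:

```python
# A white-box sketch: each control is a named, configurable rule, and every
# decision carries the list of controls that fired, so the outcome is
# explainable by design. All names and thresholds are hypothetical.

def screen_transaction(txn, controls):
    """Apply each control to the transaction; return the decision
    together with the names of the controls that triggered it."""
    triggered = [name for name, rule in controls.items() if rule(txn)]
    decision = "REVIEW" if triggered else "CLEAR"
    return decision, triggered

# Controls are plain, inspectable predicates aligned to a risk profile.
controls = {
    "high_value": lambda t: t["amount"] > 10_000,
    "high_risk_country": lambda t: t["country"] in {"XX", "YY"},
}

decision, reasons = screen_transaction({"amount": 15_000, "country": "GB"}, controls)
print(decision, reasons)  # REVIEW ['high_value']
```

Because the controls are ordinary configuration rather than hidden model internals, a compliance team can read, test, and adjust them directly.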

Companies can also choose to work closely with a vendor where there is a partnership approach and the vendor is able to advise and coach the firm to get the best out of their technology. This is a very desirable trait for vendors to have no matter whether the financial services firm is experienced or not – two heads are nearly always better than one! 


What happens when there is no explainability? 

If the regulator or client cannot see how decisioning works, they have no reason to trust it, nor any way to assess those decisions against governance goals. They need to be confident not only in the capabilities the AI confers but also in the methods it uses.

This is especially important when it comes to trust, fairness, and the confidence that bias, conscious or unconscious, and false positives have been tackled appropriately. Corporate clients and consumers must have faith that they are not being discriminated against and that decisions have been based on a fair and transparent methodology.

This feeds into risk management for the financial institution and into accurate decisions around credit risk, fraud, suitability, and the like. The AI needs to show how it identified the needle in the haystack and what made it different from all the other needles. It should then be able to learn from that to correctly identify similar needles in the future. Pertinently, it also needs to know when it does not know, and ask for help so it can keep learning from the past.
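The "knowing when it does not know" step can be sketched as a simple reject-option rule – a hypothetical illustration, not any particular product's method – where low-confidence cases are escalated to a human whose resolution can be fed back into training:

```python
def decide_with_abstention(p_suspicious, threshold=0.85):
    """Act only when the model is confident either way; otherwise
    escalate to a human reviewer so the resolved case can later be
    fed back as new training data. The threshold is illustrative."""
    if p_suspicious >= threshold:
        return "flag"       # confident it has found a needle
    if p_suspicious <= 1 - threshold:
        return "clear"      # confident it is ordinary hay
    return "escalate"       # the model knows it does not know
```

Raising the threshold sends more borderline cases to humans; lowering it trusts the model more, so the setting itself becomes a transparent, reviewable risk control.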

Those using the output of AI also need to know how it works. If analysts do not understand how a decision was reached, they are less able to make informed decisions. Institutions, having invested heavily in AI, want to see a return. That probably means using AI-powered automation to free up humans for value-added tasks. In reducing false positives, for example, the output for humans to investigate needs to be finely tuned and accurate – removing a workload that would previously have been handled at least partially manually.

It also means that data scientists can be freed from interpreting outputs to instead refine and hone algorithms and processes, and even work to extend the application of AI into other business streams. This shift of focus typically results in more interesting science – and therefore helps with the retention of highly skilled data scientists.

Trust matters to organisations just as it does to consumers. Having clarity over why a decision has been made, and being able to spot when a decision is correct but the reasoning is flawed, is one element. Taking steps to resolve the issue promotes transparency, trust, and a sense of control. It means the ‘bad decisioning behaviour’ can be corrected, and the system learns not to repeat it.

All of this encourages AI adoption and acceptance; trust through understanding through transparency is key for uptake. 

What is explainability, and why is it important?

Explainability, on a basic level, is showing how an algorithm reached the decision or conclusion that it did. Without it, using AI is like trying to work out the ingredients of a dish without knowing the recipe: you probably know what’s in it, but you can’t be certain.
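To make the "recipe" concrete – with entirely hypothetical rules and thresholds – an explainable decision can return not just its conclusion but the path of checks that led to it:

```python
def credit_decision(applicant):
    """Return a decision together with the step-by-step reasoning
    behind it: the recipe, not just the finished dish.
    The rules and thresholds here are illustrative only."""
    recipe = []
    if applicant["income"] < 20_000:
        recipe.append("income below 20,000: decline")
        return "decline", recipe
    recipe.append("income at or above 20,000")
    if applicant["missed_payments"] > 2:
        recipe.append("more than 2 missed payments: decline")
        return "decline", recipe
    recipe.append("payment history acceptable")
    return "approve", recipe

decision, recipe = credit_decision({"income": 35_000, "missed_payments": 1})
print(decision)  # approve
```

With the recipe recorded alongside every outcome, an analyst, client, or regulator can check each step rather than guess at the ingredients.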

If AI is being used to make decisions over, say, financial crime, credit risk, or suitability, then transparency is key to avoiding bias, unconscious or conscious, and to satisfying the client and regulator that the end output is appropriate and just.

Until recently, this transparency issue had been largely passed over. Risk technology vendors were unwilling to reveal their proprietary algorithms, and interactions between different layers of decisioning were complex and challenging to pull out. And in any case, the uses and reach of AI were limited. 

This is no longer the case. While AI is still in its early-to-middle stages of adoption, take-up is rapidly accelerating and fast becoming the norm. According to Gartner, by 2025, 30% of government and large enterprise contracts for the purchase of AI products and services will require the use of explainable and ethical AI.

Indeed, some of today’s AI tools can be highly complex, if not outright opaque. The workings of complex statistical pattern-recognition algorithms, for example, can become too difficult to interpret, and so-called ‘black box’ models can be too complicated for even expert users to understand fully.

But if the output of AI cannot be explained and justified, then there is a transparency issue. This casts doubt over whether the modelling used is reliable, based on up-to-date data, free of bias, accurate, and therefore fit for purpose. That does not work in a tightly regulated industry, where firms need to show the regulator the process or path they have followed to achieve compliance, and where the ethos is to act proportionately and transparently in the context of risk – which doesn’t always mean doing exactly what the client wants or expects.

The cost of performing manual tasks and processing large volumes of data is high, and humans making decisions based on inference rather than fact is hardly suitable in today’s world. The adage that the same set of circumstances would receive a different decision on a Tuesday morning than on a Friday afternoon is not entirely untrue. But the potential cost of getting a suitability or fraud decision wrong because an algorithm is hard to read is much, much higher.

Regulators are taking notice of this, looking not just at the possibilities afforded by AI but also at whether its effects are positive. Coming to the fore is the thinking that automation and AI are only valuable if they are transparent and explainable.

Regulators worldwide are increasingly vocal about the importance of understanding why the algorithm reached a particular conclusion.  The UK’s Financial Conduct Authority (FCA) defines AI transparency as ‘stakeholders having access to relevant information about a given AI system.’ 

Regulators need to have faith in financial institutions, understand both the approach and the mechanics, and be confident that the process is ethical, fair, and ultimately provable.

The same issue applies to clients. Without trust and faith in financial institutions, clients will happily take their business elsewhere. And if AI as a principle is rejected on a systemic basis for lack of transparency, then further industry take-up will be poor. That is bad news for a technology that gets better and better the more data it has and the more it evolves and progresses.

Ultimately then, if AI in all its guises is to be acceptable to regulators and clients and therefore be viable to use, then it needs to be transparent and explainable.  

Audere est facere

The early months are critical for any start-up, and they certainly are for Facctum. We have been thinking hard about the kind of culture we want at Facctum. In simple words, we are striving for a ‘connected’ culture: we want our colleagues, customers, and partners to be always connected. Many exciting steps are being taken to achieve this – none more so than the highly talented colleagues already on board, with several more in the pipeline. Another equally exciting project is our in-progress office at Vaswani Augusta @ Embassy Golf Links Road in Bengaluru.

We have had multiple debates on the pros and cons of getting our own office. Do we really need it? In the end, we decided in favour because we believe an office is the ideal setting to learn, contribute, challenge, and produce. Collaboration at all levels is key to building a connected culture. We want to motivate rather than mandate, so we are working hard to create a safe environment where our colleagues can enjoy their work, collaborate with one another, and achieve their objectives as well as those of Facctum. Our office represents a force for unity, stability, positivity, direction, and celebration. We, at Facctum, are passionate about real interactions – we want debates, discussions, and then decisions – all to enable our colleagues, and Facctum itself, to achieve our full potential. We want to work hard for our customers – we will go to any length for customer success.

The office is being designed as a modern setup with a clean and cosy experience. As employees seek greater connection with fellow colleagues and aspire to be part of a cohesive team, we believe that creating more opportunities to engage in team settings, especially for our younger colleagues, will accelerate learning and development for everyone at Facctum. When the day is not going well, an office setting can also be immensely helpful for emotional bonding. We will surely have a few of those days!

We are looking forward to moving in by July 1st, and we hope to see our colleagues forge friendships and build relationships there. We invite you to come and experience Facctum at its best at our new Bengaluru location. We also have colleagues in Pune and London, and these are and will remain strategic locations for Facctum. While we aren’t yet getting our own offices there, we remain just as committed to providing an office environment for those colleagues through co-working locations. After all, it is not about the office; it is all about belonging.

We look forward to welcoming customers, prospects, partners, and of course our current and future Facctum colleagues.