Financial Services CX Insights: 4 winning chatbot deployment strategies


Five years ago, banks cautiously deployed bots for a very narrow set of tasks, like basic FAQs, while most customer inquiries were still directed to live agents. Banks today have bigger ambitions for bots: the technology has improved, and customers increasingly demand self-service. An enterprise-wide program that supercharges automation with natural language understanding (NLU) and bots could take years for many financial institutions to fully realise. But with the right technology, the wait might not be as long.

Here are four key issues to be aware of to create a successful bot deployment within the financial services industry (FSI).

1. A Mountain of Intents

Unlike other verticals, the banking industry deals with a wide variety and volume of possible client intents as there are multiple lines of business within the industry. A financial institution typically performs numerous tasks around retail banking, chequing accounts, card services, mortgages, wealth management, and more.

Many technology vendors specialising in financial services have built ready-to-use libraries of intents and utterances. These vendors might have pre-built integrations into the backend systems that banks use, but this doesn’t mean engaging with a vendor that specialises in financial service bots will solve all potential problems.

Another factor to consider is multi-lingual client support (especially in larger organisations). A larger organisation may support clients in many languages, using a non-universal, banking-specific vernacular. Proper bot tuning, quality assurance, and usability testing remain critical to success. Rushing crucial training and quality assurance stages will create bots that deliver a disjointed client experience.

Financial services is a very rich and complex domain, comprising many business lines and a wide range of distinct products. The more scope you add, the more intents you need—and the more intents you add, the more data you need for each intent. For example, to adequately support hundreds of intents in the retail banking domain, each intent would need to be trained with an average of 2,000 human-labelled utterances, often significantly more (over 10,000) for complex parts of the language model. Weak model coverage or insufficient training data means the bot is less likely to be able to answer the user’s specific query.
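To make the relationship between intents and labelled utterances concrete, here is a minimal, purely illustrative sketch. The intent names, the toy utterances, and the word-overlap matcher are all hypothetical; production NLU models are statistical and, as noted above, need thousands of labelled utterances per intent.

```python
# Hypothetical training set: each intent maps to human-labelled utterances.
# Real banking deployments need hundreds of intents, each with thousands
# of labelled utterances, often in multiple languages.
TRAINING_DATA = {
    "check_balance": [
        "what's my chequing balance",
        "how much money do I have",
        "show my account balance",
    ],
    "transfer_funds": [
        "move 200 dollars to savings",
        "transfer money between my accounts",
        "send funds from chequing to savings",
    ],
}

def classify(utterance: str) -> str:
    """Toy nearest-intent match by word overlap. Weak coverage here
    mirrors the article's point: too little training data per intent
    means the bot misses the user's actual query."""
    words = set(utterance.lower().split())
    def best_overlap(intent: str) -> int:
        return max(len(words & set(u.lower().split()))
                   for u in TRAINING_DATA[intent])
    return max(TRAINING_DATA, key=best_overlap)
```

A tuning and QA pass would then measure how often `classify` picks the wrong intent on held-out utterances, which is where the usability testing described above comes in.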

2. Bot Fulfillment and FSI Backend Systems

Identifying intents and filling slots is only half the battle, and often the easier half; the real complication arises with intent fulfillment.

Imagine your customer wants to transfer funds from their chequing account to their savings account. Identifying that the customer wants to move a specific amount of money, and all the details surrounding this transfer, satisfies the challenges of intent recognition and slot-filling. But that’s not where the bot task ends—it must also fulfil that request. This means integration with FSI backend systems is necessary to execute the intent, and a key challenge is ensuring each of those system interfaces adheres to security compliance and communication protocols.
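The transfer example above can be sketched as the split between slot-filling and fulfilment. This is an illustrative outline only: the `TransferIntent` slots and the backend’s `transfer` method are hypothetical stand-ins for whatever your core banking interface actually exposes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransferIntent:
    """Slots the NLU layer must fill before fulfilment can begin."""
    amount: Optional[float] = None
    from_account: Optional[str] = None
    to_account: Optional[str] = None

    def missing_slots(self):
        return [name for name, value in vars(self).items() if value is None]

def fulfil(intent: TransferIntent, backend) -> str:
    """Slot-filling is complete only when no slots are missing; the bot
    then executes the intent through a backend interface (hypothetical
    `transfer` method), which must honour the bank's security and
    communication protocols."""
    missing = intent.missing_slots()
    if missing:
        return "Please provide: " + ", ".join(missing)
    backend.transfer(intent.from_account, intent.to_account, intent.amount)
    return "Transfer complete"
```

In practice the `backend` object is where the integration challenge lives: each system it wraps has its own authentication, protocol, and compliance requirements.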

Simplifying the process can solve this. By consolidating the interfaces to many backend systems, or by working with one company that does it for you, your bot might only need to talk to a single gateway instead of 10 different systems.

3. Data Security, Privacy, and Compliance

Although many start their journey with FAQ bots or concierge bots, the real challenge comes with transactional bots, which face higher standards for controls. And when it comes to developing these transactional bots, there’s no quick fix.

The first step is to divide all intents into two buckets: those that require identification and verification (ID/V) and those that don’t. This clarifies the security profile that you need to follow within your bot ecosystem.

The second step is to categorise the types of information that will fill slots or that bots will deliver. Determine what’s required from the perspective of PCI compliance and data-privacy regulations.

Finally, the third step is to identify all the systems that will be leveraged for intent fulfillment—and then categorise each of these from a security/risk perspective.
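The three steps above amount to building an intent inventory. The sketch below shows one hypothetical way to record it; the field names, intents, and risk tiers are illustrative, not a compliance framework.

```python
# Hypothetical intent inventory capturing the three buckets described
# above: ID/V requirement, data sensitivity, and fulfilment-system risk.
INTENT_INVENTORY = [
    {"intent": "branch_hours",   "needs_idv": False, "data": "public",    "system_risk": "low"},
    {"intent": "check_balance",  "needs_idv": True,  "data": "personal",  "system_risk": "medium"},
    {"intent": "transfer_funds", "needs_idv": True,  "data": "financial", "system_risk": "high"},
]

def security_review_queue(inventory):
    """Intents that require ID/V or touch high-risk backend systems
    are the ones to bring to security and compliance teams first."""
    return [item["intent"] for item in inventory
            if item["needs_idv"] or item["system_risk"] == "high"]
```

Having the inventory in this shape makes the conversation with security and compliance concrete: they can review rows rather than abstractions.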

Gathering this information ahead of time will prepare you for the necessary discussions with your security and compliance teams. You want your security and compliance team to be a partner in this endeavour as well as the ultimate authority on what you can allow. Do your homework; start the consultation process with these teams before developing very ambitious plans.

4. Model Risk Governance

Banks use complex models and data science within their business operations. Model risk governance grew out of this practice when banks first started using artificial intelligence (AI) algorithms for risk assessment, such as deciding whether to approve a customer’s loan application. If those algorithms didn’t work properly, the unintended consequences could put banks in jeopardy.

Now, model risk governance is a formal process with strict gate control. Whenever any AI or machine learning algorithm is proposed, banks must go through this detailed and lengthy process to protect their institution and interests.

Suppose you have a project in mind you want to complete in six months. Talk to your in-house model risk governance team as early as possible and ask them how long their process will take, then build this into your project timeline.

While model risk governance wasn’t originally intended to cover bots, these teams are now involved with all services that use AI models. If a process or system makes calculations, forecasts, or analyses, it could be classified as a model. In the end, you’re responsible for navigating this process and having all required information available for the model risk governance team. This information can include tuning, testing, and QA processes; data source information; data cleansing processes; and detailed processes for eliminating the risk of unfair or unethical bias within the model.

The Road to Bot Sophistication

Considering all these challenges, you might wonder where to begin and how to move toward a sophisticated end-state bot deployment.

A quality bot experience is dependent on both understanding what the user wants to do and being able to complete that task. It doesn’t matter how good your AI is at understanding what users are saying if your answers are poorly designed or the bots are not integrated with backend systems to complete tasks.

Start with use cases you know will work and deliver value. Chatbots and virtual assistants are deployed at scale and delivering value at major banks today in their consumer banking units. Bank of America’s Erica Virtual Assistant, for example, has 21 million users, and usage grew over 60% last year. Customers use Erica because it’s a faster, more convenient way to do things on mobile. This consumer use case is a great place to start. If you don’t want to dive straight into deploying a bot into digital banking, perhaps start on your public (dotcom) website and expand into authenticated channels over time.

Working with a technology partner who brings a pre-trained banking language model to the table will save time and resources on AI training, get you to market faster, and lower the overall risk of the project. However, you still need to ensure the bot can answer user queries and complete their tasks.

Start with the low-hanging fruit use cases with a high volume of routine, low complexity queries, and tasks. You can generally find these in your retail/consumer business units. Pick high-volume use cases as it’s a strong indication you’re solving a real problem, and you’ll generate sufficient data to train and optimise your AI model quickly.

The good news is that there’s a lot of low-hanging fruit. One look at what customers are calling your call centre about will give you a clear indication of where to start. Alternatively, start with low-risk use cases, such as from your internal HR or IT help desk. These sorts of knowledge management bots are akin to a search engine with a conversational interface and can be very effective, easier to build and train, and faster to deploy. However, the solution you build for this use case and your experience training it will not transfer over to more complex, mass consumer use cases.

Finally, keep humans in the loop. Bots help users solve routine, repetitive problems, which comprise the bulk of what customers want to do in digital banking, and are the queries monopolising your support team’s time. But bots will never work perfectly all the time. There are many higher-value tasks, such as transaction disputes and estate issues, that are best solved by your support team. Having the ability for the bot to seamlessly hand a user over to a live agent gives you the best of both automation and the human touch.
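A simple routing rule captures the human-in-the-loop idea. This is a minimal sketch under stated assumptions: the agent-only intent list and the confidence threshold are hypothetical, and real deployments would tune both against live traffic.

```python
# Agent-only intents: higher-value tasks the article notes are best
# handled by the support team (illustrative list).
AGENT_ONLY = {"transaction_dispute", "estate_issue"}

def route(intent: str, confidence: float, threshold: float = 0.7) -> str:
    """Send the user to a live agent when the task is agent-only or
    the NLU model's confidence falls below the threshold; otherwise
    let the bot handle the routine request."""
    if intent in AGENT_ONLY or confidence < threshold:
        return "live_agent"
    return "bot"
```

The seamless handoff itself, passing conversation context to the agent, is the part worth investing in so customers never have to repeat themselves.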

A practical guide to mastering chatbots

Learn how to master the basics of bots and serve customers with empathy, at scale with our Practical Guide to Mastering Bots.
