Considering launching a chatbot in 2017? Challenge the hype!

For the past few months, few industries have been riding the Artificial Intelligence wave like financial services. Whether it's Wall Street or High Street – most of the big names in banking have launched attempts at harvesting the promise of deep learning, natural language processing or reasoning algorithms. Some can point to recognizable success stories, such as automating legal work or quantitative trading; others oversell the introduction of merely rule-based systems like robo-advisors or process automation as machine intelligence.

Huge expectations

As of today, there are hundreds of vendors and consultants selling AI into financial services. More and more Fintech players also claim to use some form of machine learning, seen as a quality stamp that helps sell their applications into the financial industry. While this trend ups the pressure to rethink the value proposition of many products and services, it adds a whole new level of complexity and lock-in risk for traditional banks. Given the immaturity of many vendor solutions, they will almost exclusively rely on heavy training with banks' data. What's also seldom mentioned is that AI solutions are far from finished products, with a long path to readiness for integration and deployment in a large enterprise context.

Moreover, there is a noticeable push by vendors that traditionally dealt with banks' IT departments to market their tools directly to the front office. Selling whatever buzzword gets their attention may make bankers fall in love with AI tools and speed up their traditionally slow buying cycle. But buying technology for the sake of having technology typically won't do the trick. Many business functions start by searching for reasons to implement a certain tool, often without a clear concept of which client problem to solve, nor sufficient judgment of the effort needed to train algorithms or integrate the tool into the existing IT architecture.

There is one theme that banks seem to have unofficially declared their favourite AI application: chatbots. From San Francisco to New York, from London to Oslo and from Singapore to Shanghai – there are already various implementations, ranging from text-based chatbots answering client questions to more ambitious virtual assistants executing tasks like transferring money or scheduling advisor meetings. Add to that the first applications for devices like Alexa or Google Home – an even more challenging discipline, given the restriction to voice control plus the unresolved data secrecy and authentication issues stemming from their heavy reliance on cloud technology.

First learning curve

What most conversational agents have in common, however, is that their current user experience is mediocre at best. The vast majority are little more than dumb Q&A bots. Yes, Natural Language Processing is still among the most challenging disciplines in AI. And yes, users do grant a novelty bonus for the time being – after all, we are still in the age of narrow AI. But currently most bots are capable of little more than linear, single-turn conversations. Many struggle with contextual background, let alone switching context mid-conversation. Navigating between content levels or understanding the status of a request is difficult; so is building shared context, which would make for a true dialogue. With the memory of a certain Disney fish, and often helpless when facing sarcasm or fragments of sentences and words, today's bots are far from enabling natural conversations.

Numerous banks find themselves having to ramp up expert resources that spend their days scripting ever new content into digestible answers. Many are genuinely surprised at the amount of training data needed to feed a bot with domain knowledge, the effort of getting even a single user intent right, and the lower-than-expected rates of correct intent detection. Add to this the challenge of generating replies and inferring new facts from user input, and it's plain to see why many first-generation chatbots have been shut down after only weeks in operation or trial. Humans have a habit of asking complicated questions, and humans tend to get annoyed quickly.
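To make the single-turn limitation concrete, here is a minimal sketch – all intents, keywords and thresholds are hypothetical, not taken from any real product – of the keyword-style intent matching that many first-generation bots effectively amount to. Each utterance is scored in isolation, so a follow-up fragment like "and in euros?" carries no context and falls through to a fallback answer.

```python
# Minimal sketch of single-turn intent matching (hypothetical intents/keywords).
# Each message is classified in isolation -- there is no memory of earlier turns.

INTENTS = {
    "check_balance": {"balance", "account", "much", "have"},
    "transfer_money": {"transfer", "send", "pay", "money"},
}

def detect_intent(utterance: str, threshold: float = 0.5) -> str:
    """Return the best-scoring intent, or 'fallback' below the threshold."""
    tokens = set(utterance.lower().replace("?", "").split())
    best_intent, best_score = "fallback", 0.0
    for intent, keywords in INTENTS.items():
        score = len(tokens & keywords) / len(keywords)  # keyword overlap ratio
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else "fallback"

print(detect_intent("How much money do I have on my account?"))  # → check_balance
print(detect_intent("and in euros?"))  # fragment, no context → fallback
```

Real NLP pipelines use trained classifiers rather than keyword overlap, but the structural weakness is the same: without a dialogue state carried across turns, every message is a fresh start.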

While bots hold the promise of easier, more frequent and more seamless interactions with clients, that promise will only be kept if the bank actually solves clients' most pressing needs. Don't get me wrong: I'm all for innovation in financial services – but within reason. We are near the peak of inflated expectations, and many banks seem unaware of the deep trough likely to follow. It's easy to fall victim to hype, but when your own technical maturity argues for starting with easier machine learning on structured data, it's less smart to attempt automated client conversations first. It is essential to think processes through to the end – a conversation ending with a forced branch visit or a wait for physical mail will still be considered broken.

Challenges

Multi-turn, multi-intent, multi-language, natural conversations are wishful thinking for now – a thought for tomorrow. In the meantime, it's worth considering whether the time is ripe for facing clients with automated chats at all. This decision cannot be taken lightly. It is essential to gain experience with user behaviour and establish a viable strategy for tackling conversational commerce.

Determine preferred channels, interfaces and ways to structure your data sources. Select your vendor carefully and get references from its existing clients. Don't outsource this decision, and don't overload yourself with unrealistic ambitions or complexity from the beginning. Give the bot a frame for what it can say and which statements may be problematic due to their legally binding nature. Start trials with internal users and work your way towards clients. Define minimum thresholds for quality KPIs and measure them. Learn to deal with emotional responsiveness and what makes for a convenient conversation. Be transparent about the fact that users are talking to a machine, and make clear what it can and cannot do. Give your bot a recognizable, likeable but neutral persona. Think through how to deal with data secrecy. Determine the probability of generating the right reply below which the conversation is handed off to humans – and don't forget to learn from your service centre's written replies. Run analytics on conversations and monitor how users' needs and behaviours change.
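The handoff rule mentioned above can be sketched in a few lines. This is an illustration only – the threshold value, function names and messages are hypothetical, and a production system would also log the escalated conversation as training data.

```python
# Sketch of a confidence-based human handoff rule (all values hypothetical).
# The bot only answers when its confidence in the reply clears a pre-defined
# threshold; everything below is routed to a human in the service centre.

HANDOFF_THRESHOLD = 0.8  # hypothetical minimum-confidence KPI

def route_reply(bot_reply: str, confidence: float) -> str:
    """Return the bot's reply if confident enough, otherwise escalate."""
    if confidence >= HANDOFF_THRESHOLD:
        return bot_reply
    # Low confidence: hand the conversation off rather than risk a wrong,
    # possibly legally binding, answer.
    return "Let me connect you with one of our advisors."

print(route_reply("Your balance is CHF 1,240.", 0.93))  # confident → bot answers
print(route_reply("I think you want... a mortgage?", 0.41))  # → handoff
```

Where to set the threshold is itself a product decision: too high and the bot escalates almost everything, too low and it confidently gives wrong answers – which is why measuring the quality KPIs mentioned above matters.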

As plain as it seems, an industry built on trust cannot afford to jeopardize user centricity.
