How to build a chatbot rich in functionality
Editor's note: The following is a guest post from Ilker Koksal, CEO and co-founder of Botanalytics.
When it comes to chatbots, there's a big focus on designing for an optimal user experience. While things like accessibility, personality and language are important, they're only half the story, as having a friendly bot means little if it doesn't actually work. When building a bot, taking care of essential functions like error response, vocabulary and speed of response is just as important.
Why? A chatbot testing strategy focused on its functions is a first step to assessing user experience. By finding and fixing the biggest bottlenecks your user might encounter, you're ensuring they'll have a good experience with the bot and will be more likely to come back for more. Here's how to test chatbots and what to look for when assessing their functionality.
Increase chatbot functionality with NLP
The primary measure of a good chatbot is simple: understanding what the user is saying. Natural language processing (NLP) is essential to helping your chatbot understand the meaning and sentiment behind the words a user says. In your chatbot testing strategy, train your bot to expand its vocabulary and understand phrases your users will likely throw at it. Tools like API.AI, IBM Watson and Microsoft Bot Framework can help make your chatbot more lifelike and increase its understanding.
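To make the idea of an expanded vocabulary concrete, here is a toy sketch of keyword-based intent matching. The intent names and synonym sets are made up for illustration; a real bot would delegate this to a platform like API.AI, IBM Watson or the Microsoft Bot Framework rather than hand-rolled keyword lists.

```python
# Toy intent matcher: each intent maps to an expanded vocabulary of
# terms users might plausibly use. All names here are illustrative.
INTENTS = {
    "check_order": {"order", "package", "shipment", "delivery", "parcel"},
    "refund": {"refund", "money back", "return", "reimburse"},
}

def match_intent(message):
    """Return the first intent whose vocabulary appears in the message."""
    text = message.lower()
    for intent, vocabulary in INTENTS.items():
        if any(term in text for term in vocabulary):
            return intent
    return None  # no match: hand off to the fallback/error response
```

The point of the expanded sets is that "Where is my parcel?" and "Where is my package?" both resolve to the same intent, even though neither contains the word "order".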
Aside from meaning, sentiment analysis is incredibly important if your chatbot functionality includes customer service, shopping or other customer-centric tasks. If a user's anger or frustration is reflected in their language, for example, the bot can prioritize that user over others and make an immediate transfer to a live agent. By understanding users' feelings and sentiment, a bot can respond more like an empathetic human.
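The escalation logic described above can be sketched in a few lines. This is a deliberately crude word-counting stand-in; the word list and threshold are placeholders, and a production bot would use a trained sentiment model instead.

```python
# Placeholder lexicon; a real bot would use a trained sentiment model.
NEGATIVE_WORDS = {"angry", "frustrated", "terrible", "useless", "worst"}

def should_escalate(message, threshold=1):
    """Transfer to a live agent when enough negative words appear."""
    words = set(message.lower().split())
    return len(words & NEGATIVE_WORDS) >= threshold
```

Whatever the underlying model, the shape of the decision is the same: score the sentiment of the message, and route the conversation to a human once it crosses a threshold.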
Work on error responses
Despite your NLP efforts, there will be moments where your bot doesn't understand something. There's nothing more awkward for users than encountering an error when it's obvious they're interacting with a bot. For users who don't know what to say, or why their input wasn't recognized, this is bad news — and can push them to leave the conversation altogether.
Your chatbot test plan should vet error responses and ensure they keep conversations on track. For example, instead of an "I didn't get that" type of message, provide suggestions on how the user can reword their query, informed by conversational analytics tools. By providing conversational pathways for the user when problems arise, you can teach the user about chatbot functionality and keep the conversation going.
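One simple way to replace a bare "I didn't get that" with a suggestion is fuzzy matching against the phrasings the bot already understands. The sketch below uses Python's standard-library `difflib`; the `KNOWN_QUERIES` list is a stand-in for your bot's trained utterances, and the cutoff is an arbitrary choice.

```python
import difflib

# Stand-in for the utterances your bot is actually trained on.
KNOWN_QUERIES = ["track my order", "cancel my order", "talk to an agent"]

def fallback_response(user_input):
    """Suggest known phrasings close to what the user typed,
    instead of a dead-end error message."""
    suggestions = difflib.get_close_matches(
        user_input.lower(), KNOWN_QUERIES, n=2, cutoff=0.5
    )
    if suggestions:
        quoted = " or ".join('"{}"'.format(s) for s in suggestions)
        return "Did you mean: " + quoted + "?"
    # No near match: offer concrete pathways rather than a dead end.
    return "Try asking about your order, or say 'talk to an agent'."
```

Either branch gives the user a conversational pathway forward: a close match to confirm, or a short menu of things the bot can actually do.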
Speed is what makes a good chatbot
Speed of response is an easily overlooked issue that should be part of any chatbot test plan. Some botmakers intentionally delay chatbot responses to make their bots seem more human — taking time to "type out" a reply — but waiting is a point of frustration for users. In fact, what makes conversational UI great is its immediacy. If a chatbot is slow to respond or to complete the task that meets the user's goal, the user might stop talking altogether and never come back.
Many consumers aren't quite accustomed to talking to chatbots without direction. Less tech-savvy users might worry that a chatbot taking minutes to respond will never respond at all. The question of whether or how long to wait is something that many users won't want to deal with.
That said, sometimes a wait is unavoidable. If your bot needs to take time to properly process a query, it should let the user know upfront that a brief wait time is necessary. Managing user expectations throughout the conversation is part of what makes a good chatbot, so always be honest about wait times so you don't frustrate the user.
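The "be honest about wait times" advice above can be implemented as a small pattern: give the slow operation a short grace period, and if it hasn't finished, send a holding message before delivering the real answer. This sketch uses Python's `asyncio`; `send_message` and the timings are illustrative stand-ins for your messaging layer.

```python
import asyncio

async def reply_with_wait_notice(task, send_message, grace=1.0):
    """Run a possibly slow task; if it exceeds the grace period,
    warn the user upfront that a brief wait is necessary."""
    work = asyncio.ensure_future(task)
    try:
        # shield() keeps the timeout from cancelling the real work.
        result = await asyncio.wait_for(asyncio.shield(work), timeout=grace)
    except asyncio.TimeoutError:
        await send_message("One moment while I look that up...")
        result = await work  # the work continued in the background
    await send_message(result)
```

Fast queries feel instant because no notice is sent; only genuinely slow ones get the expectation-setting message, which keeps the conversation honest without adding artificial delay.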
Let users review and edit responses
Have you ever gotten stuck in a loop when talking to a chatbot? It's a common occurrence that's a nightmare to get out of. Maybe the bot is tripped up in an error, or maybe you need to go back and change an option. Whether the user or the bot is to blame for such a misstep, it's vital that you give the user the option to go back a few steps and revise or start over from the main menu.
Your chatbot test plan should check whether there's potential for looping, and provide ways for the user to correct any issue that occurs. Coupled with the tip on error responses above, you can provide options to lead the user back to an earlier point in the conversation and ensure they don't get stuck again.
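Loop detection itself can be as simple as counting how many times in a row the conversation lands in the same state. The class below is a minimal sketch; the state names and the `max_repeats` limit are illustrative choices, not fixed rules.

```python
class LoopGuard:
    """Detect when a user is stuck repeating the same conversation
    state (e.g. hitting the fallback over and over), so the bot can
    offer to go back a few steps or restart from the main menu."""

    def __init__(self, max_repeats=2):
        self.max_repeats = max_repeats
        self.repeats = 0
        self.last_state = None

    def check(self, state):
        """Return True when the user appears stuck in a loop."""
        if state == self.last_state:
            self.repeats += 1
        else:
            self.repeats = 0
            self.last_state = state
        return self.repeats >= self.max_repeats
```

When `check` returns `True`, the bot's next message should include the escape hatches discussed above: a "go back" option and a "main menu" option, rather than yet another fallback.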