Do not raise exaggerated hopes.
Chatbots have taken a lot of abuse lately. They are too stupid, everyone says. Yes, and that is exactly the point, as long as real artificial intelligence is not yet available to us.
So, the question is: how do we deal with this? My answer: do not raise expectations that you cannot fulfill. Develop a chatbot for clearly specified use cases, and make it clear to the user from the beginning that the chatbot serves only a specific purpose; it is not an artificial person with whom one can converse freely. Implement a good onboarding process that outlines what the chatbot knows and gives examples of questions it can answer. This way you avoid a lot of frustration on the user's side.
An example: the chatbot of the German Bundestag has a free-form text entry field and therefore gives the impression of being a clever fellow. In fact, it can only give meaningful answers to simply formulated queries. A little more modesty in the presentation would help to avoid disappointment.
Develop a clear use case.
Chatbots are not meant to be an end in themselves, even if they are planned as a marketing gimmick.
Nevertheless, it is worth visiting the somewhat dated chatbot Cleverbot by the British AI researcher Rollo Carpenter and speaking with it in English. Cleverbot has no special purpose. It learns from all the conversations it has had and can therefore hold very realistic conversations. Eviebot and Boibot are two extensions that speak to the user through avatars and even manage basic facial expressions. Still fairly rough, but they give an idea of what avatar assistants can look like.
But: these chatbots do not help a customer any further. As soon as one wants to know something concrete, they disappoint. Try it yourself: even smarter general-purpose chatbots such as Mitsuku practically demand that you test their "humanity" with questions, and in the end you are disappointed, because the AI acts stupidly or simply evades.
JOBmehappy, on the other hand, will not disappoint anybody: this chatbot provides job offers, nothing else, and it doesn’t want to do anything else.
Provide a virtual home button.
Recently, I was on the website of a chatbot that helps with purchasing clothes. I quickly landed in the lingerie department, but I was looking for something quite different. Regardless of what I entered, I was only shown more variations and colors of underwear.
What was missing was a simple "restart" button. Some chatbots do offer one. Others recognize a dozen phrases that can be understood as a call for help from the user. No chatbot should do without this function.
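Recognizing such "calls for help" can be as simple as matching normalized input against a phrase list. A minimal sketch; the phrase list and normalization here are illustrative assumptions, not a standard, and a real bot would localize and extend them:

```python
# Illustrative phrase list; a production bot would localize and extend this.
RESTART_PHRASES = {
    "restart", "start over", "start again", "reset",
    "begin again", "new search", "help", "i'm stuck",
}

def wants_restart(user_input: str) -> bool:
    """Return True if the input looks like a request to start over."""
    normalized = user_input.lower().strip(" .!?")
    return normalized in RESTART_PHRASES

print(wants_restart("Start over!"))    # True
print(wants_restart("Show me shoes"))  # False
```

Even this crude matcher is better than leaving the user trapped in a dead-end dialogue.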
I also find a "back" function very important. Suppose a customer has worked through a dialogue, arrived at the product they are looking for, and then asks a question whose answer no longer satisfies them. They would like to take a step back without having to restart the entire query.
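One common way to support such a step back is to keep the dialogue states on a stack. A hypothetical sketch, with state names and contents invented for illustration:

```python
# Hypothetical dialogue history that supports "back" without a full restart.
class DialogueHistory:
    def __init__(self, initial_state):
        self._stack = [initial_state]

    def advance(self, new_state):
        """Push the state reached after the user's latest input."""
        self._stack.append(new_state)

    def back(self):
        """Step back one turn; the initial state is never popped."""
        if len(self._stack) > 1:
            self._stack.pop()
        return self._stack[-1]

history = DialogueHistory({"step": "greeting"})
history.advance({"step": "category", "category": "shoes"})
history.advance({"step": "product", "product": "sneaker X"})
print(history.back())  # {'step': 'category', 'category': 'shoes'}
```

The user returns to the category view with their earlier choices intact, instead of starting from the greeting.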
Also make partial information modifiable.
When booking a flight, there is typically an outbound and a return flight. Chatbots can often recognize both in one sentence. What is usually missing is the ability to change part of it afterwards, for example, the day of the return flight or the departure time.
So, make sure that the chatbot builds contextual knowledge in which the individual pieces of data can be changed independently. Otherwise, the chatbot acts stupid and disappoints the customer. Ideally, every part of a booking or purchase order should be modifiable at any time via language input.
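This kind of contextual knowledge is usually modeled as named slots, so a single piece of data can be overwritten without touching the rest. A minimal sketch; the slot names and dates are assumptions for this example, not a real booking API:

```python
# Illustrative slot-filling model: each piece of a booking is a named slot.
class FlightBooking:
    def __init__(self):
        self.slots = {
            "origin": None, "destination": None,
            "outbound_date": None, "return_date": None,
        }

    def update(self, **changes):
        """Overwrite only the slots the user mentioned."""
        for name, value in changes.items():
            if name not in self.slots:
                raise KeyError(f"unknown slot: {name}")
            self.slots[name] = value

    def missing(self):
        """Slots the bot still needs to ask about."""
        return [name for name, value in self.slots.items() if value is None]

booking = FlightBooking()
booking.update(origin="Berlin", destination="Rome",
               outbound_date="2018-05-02", return_date="2018-05-09")
booking.update(return_date="2018-05-10")  # change just the return flight
print(booking.slots["return_date"])  # 2018-05-10
```

When the user says "make the return flight a day later," the bot updates one slot and keeps everything else, instead of restarting the whole query.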
Do not link out all the time.
Of course, companies hoping for more traffic from their chatbots would like to keep users on their website. Gymi, the Gymondo bot, shows what this looks like: the chat appears to be mainly a vehicle used primarily to lure you to content on the website.
But the user is actually in a chat. They want to chat, not open web pages. They want to get answers in the chat, not have to read them on a website.
I am not saying that all answers must be given within the messenger. Still, consider carefully when and for what reason you send users via a link to a website.
Think in small snacks.
If someone has a problem with their smartphone, for example, and types the question into Google, they usually end up on very long web pages. In the best case, an answer is buried somewhere deep in the text. That is not how you should think about chatbots.
A good example is the news app Resi from Berlin, which is available for iOS and in a version for mobile browsers. Resi presents itself as a chat partner that tells you about current news from all over the world. The user cannot ask natural-language questions at this time; instead, the user is usually offered several response options in the form of buttons. On mobile devices, this is even more convenient than natural-language input.
The most important reaction options are either to express no interest – or to get details. After the initial details, the user can then either turn to other topics or query further background information about the message.
Delivering news in snack-sized pieces may seem somewhat naive and simplistic. On the other hand, Resi is a very simple way to really address the needs of the reader, and it feels very personal. This approach is, however, still associated with a high manual editorial effort. Novi offers something similar on Facebook; for English news, Quartz is interesting here.
Ask, but correct.
Information can be ambiguous. Product names may be similar, cities may share names across countries, and so on. The paralegal bot RATIS, which specializes in flight delays, asks about each city name to verify whether "Berlin, Germany" is indeed meant. In principle, this is useful, but it quickly becomes annoying.
Lufthansa Best Price hides its verification by marking place names with a link: if you click it, Google Maps shows whether it is really the right city, without an annoying follow-up question.
Do not ask for any information that you do not use.
Chatbots collect a lot of information and can thereby prepare purchases or bookings. The UX fail occurs when the customer clicks the link and finds that the shop or website has received none of their input. The user provided lots of information in the chat and now feels they are no longer taken seriously in the shop, because they have to repeat everything.
I guarantee: they will avoid the chatbot and the shop in the future. An absolute no-go.
Do not develop a rambling chatbot.
Poncho struck me as a pretty clever guy. On request, the chatbot provides a weather forecast in the morning and in the evening, which is actually quite nice. Unfortunately, Poncho thinks it must pad each announcement with a joke that is much longer than the forecast itself.
Another example: Congstar's chatbot Sophie responds very sensibly to the complaint that a SIM card is broken, but only after more than 200 characters of generalities that have nothing to do with the actual inquiry. Why?
Check whether your chatbot repeats itself.
Chatbots are stupid, yes. So the question arises as to how we deal with this. A user who keeps getting identical answers to their input will find the chatbot particularly stupid, and thus just a waste of time.
So, make sure to include logic that checks whether the chatbot keeps giving the same answer, for whatever reason. Ensure that alternatives are offered to the user, or that a restart of the conversation is suggested.
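Such a check can be as simple as remembering the last few answers per conversation. A minimal sketch, assuming the bot filters every outgoing answer through a guard; window size and fallback wording are illustrative assumptions:

```python
from collections import deque

# Minimal repetition guard: if the same answer would repeat within the
# last few turns, offer a rephrase or restart instead.
class RepetitionGuard:
    def __init__(self, window: int = 3):
        self._recent = deque(maxlen=window)  # remembers recent answers

    def filter(self, answer: str) -> str:
        if answer in self._recent:
            return ("I seem to be repeating myself. Would you like to "
                    "rephrase your question or start over?")
        self._recent.append(answer)
        return answer

guard = RepetitionGuard()
print(guard.filter("Our shop opens at 9."))  # passed through unchanged
print(guard.filter("Our shop opens at 9."))  # replaced by the fallback offer
```

The bounded window means an answer may legitimately recur later in a long conversation; only back-to-back repetition triggers the fallback.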
Do not pretend to be a human being.
I also observe that many chatbots are given a cute name and are graphically represented by an appealing avatar.
The chatbot of chatShopper is called Emma; Microsoft's chatbot Zo.ai looks like a pretty redhead. There is little to say against this, and yet you should make sure that the chatbot does not pretend to be a human being.
Congstar's Sophie is likeable, but she makes it clear that she is virtual. Why is that important? Because users find this out sooner or later, and then they feel deceived. People simply behave differently with a human chat partner than with a machine.
A chatbot can have a personality. This should be adapted to the image of the company, the wishes of the users, and the task of the chatbot.
The personality of a chatbot does not develop out of its actions, but only from its graphical presentation and, of course, from its language: does it address you formally or informally? Is it friendly or serious? Does it treat users like children or like adults?
The health coach Lark analyzes our movement data on the smartphone and then makes us realize that we have moved too little while eating too much of the wrong things. Communicating this without insulting the user is not easy.
Not without reason, I observe a certain tendency toward joking chatbots. But do they really fit every product? This should be thought through well in advance.
[Poster: Good UX for chatbots]
Read more about why, in my opinion, useful chatbots currently do not need real artificial intelligence (AI): "I would like to talk to your chatbot!"