Why good chatbots need context, not tree-based flows

In this example, you’re interested in visiting an attraction and want to find out how much the entrance tickets cost, so you ask the chatbot directly.

Surprisingly, the chatbot didn’t know the answer, despite having the relevant API integrations.

With a bit of guidance, the chatbot redirects you to a guided (rule-based) conversation flow: say “Buy tickets” first, then “Ticket prices”, and finally “Cloud Forest” to reach the answer.

Not quite there yet.

The vast majority of virtual agents use a natural language understanding (NLU) model, yet users are still stuck with unnatural dialogues like this one.

You can’t explain a chatbot’s intelligence simply by saying that one NLP platform is better or worse than another. It’s a convenient excuse, but it isn’t the cause here. Why? The job of a well-trained NLU model is to map an input (a user utterance) to an output (a user intent). For example, both “Send curry chicken pizza to 20 Sunshine Avenue” and “I want fish and chips” resolve to the same “Food Order” intent.
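
As a rough sketch of that contract (utterance in, intent plus entities out), here is a toy version in Python. The NluResult shape and the keyword matching are illustrative assumptions, not any platform’s API; a real NLU model generalizes across phrasings instead of matching keywords.

```python
from dataclasses import dataclass, field

@dataclass
class NluResult:
    intent: str
    entities: dict = field(default_factory=dict)

def detect_intent(utterance: str) -> NluResult:
    # A real NLU model generalizes across phrasings; this toy matcher only
    # illustrates the contract: utterance in, intent (plus entities) out.
    text = utterance.lower()
    if any(food in text for food in ("pizza", "fish and chips", "curry")):
        return NluResult(intent="Food Order")
    return NluResult(intent="Fallback")

# Both phrasings resolve to the same intent.
print(detect_intent("Send curry chicken pizza to 20 Sunshine Avenue").intent)  # Food Order
print(detect_intent("I want fish and chips").intent)  # Food Order
```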

However, that is where intent detection ends. As a conversation designer or developer, you need to consider what happens after the intent is detected. That’s where context comes in: it is what allows the bot to give a direct response whenever possible.

In real life, if you and a friend finally meet up after months of lockdown, all the moments from your last trip that you both remember shape the context. That context has specific parameters, such as the cities you visited and the people you met along the way. Context is also perishable: those pre-COVID holiday moments are no longer the first thing on your mind once you and your friend have met up several times to talk about other things.
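
Chatbot context behaves the same way. Here is a minimal sketch of a perishable context store with a turn-based lifespan; the class and its counter are illustrative assumptions, though some NLU platforms do expose a similar per-context lifespan setting.

```python
class ContextStore:
    """Session memory whose entries fade after a fixed number of turns."""

    def __init__(self, lifespan: int = 5):
        self.lifespan = lifespan
        self._slots = {}  # slot name -> (value, turns remaining)

    def set(self, name, value):
        self._slots[name] = (value, self.lifespan)

    def get(self, name):
        entry = self._slots.get(name)
        return entry[0] if entry else None

    def end_turn(self):
        # Called once per conversation turn: stale parameters expire.
        self._slots = {name: (value, ttl - 1)
                       for name, (value, ttl) in self._slots.items() if ttl > 1}

memory = ContextStore(lifespan=2)
memory.set("attraction", "Cloud Forest")
memory.end_turn()
print(memory.get("attraction"))  # still remembered: Cloud Forest
memory.end_turn()
print(memory.get("attraction"))  # expired: None
```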

When you’re programming chatbots, you often want to act on the specific information the user utters. For example, your virtual agent could proactively extract the food name and delivery address during the conversation session and commit them to a memory state (the context). The bot should never ask for information the user has already provided earlier in the conversation.
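
A minimal sketch of that slot-filling pattern, assuming hypothetical slot names (“food”, “address”) and helper functions rather than any specific framework:

```python
REQUIRED_SLOTS = ("food", "address")  # assumed slot names for the food-order example

def update_context(context: dict, extracted: dict) -> None:
    # Commit every newly extracted entity to the session memory state.
    context.update({name: value for name, value in extracted.items() if value})

def next_prompt(context: dict):
    # Prompt only for slots the user has not already filled.
    for slot in REQUIRED_SLOTS:
        if slot not in context:
            return f"Could you tell me the {slot}?"
    return None  # everything is known; hand off to fulfillment

session = {}
update_context(session, {"food": "curry chicken pizza", "address": None})
print(next_prompt(session))  # asks for the address, never re-asks the food
```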

Unfortunately, many chatbots today can’t remember the essential parameters needed to hold a helpful dialogue, so the user eventually has to repeat critical details just to help the bot along.

Why does this happen? Here are some possible causes:

  1. Designing only the happy paths in the tree-like conversation design tools of some low-code software
  2. Treating intents as turns or checkpoints in the flow, rather than as goals the customer has in mind (see the sketch after this list)
  3. Handing conversation mind maps or flowcharts to software engineers with no specification of how user error corrections and chat detours should be handled
  4. Struggling to account for the large number of permutations in a non-linear application, unlike a web or mobile app with finite flows to success/failure states
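
To make point 2 concrete, here is a sketch of the alternative: an intent modeled as a goal with required parameters rather than a checkpoint in a fixed tree. The intent and slot names are assumptions based on the attraction example.

```python
# Point 2 in code: an intent modeled as a goal with required parameters,
# not as a checkpoint in a fixed tree. Names are illustrative assumptions.
TICKET_PRICE_GOAL = {
    "intent": "ticket_price_inquiry",
    "required": ("attraction", "participants"),
}

def can_fulfil(goal: dict, context: dict) -> bool:
    # The goal is reachable from any conversational path, in any order of
    # turns, as soon as the context holds all required parameters.
    return all(slot in context for slot in goal["required"])

print(can_fulfil(TICKET_PRICE_GOAL, {"attraction": "Cloud Forest"}))  # False
print(can_fulfil(TICKET_PRICE_GOAL, {"attraction": "Cloud Forest", "participants": 2}))  # True
```

Because fulfillment depends only on what the context holds, the user can supply the parameters in any order, or several in a single utterance.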

Now consider the attraction example again, this time with a context-aware chatbot. It extracts the entities it looks for in a ticket price inquiry intent: the number of participants and the attraction site. As there is sufficient data to look up ticket prices, the chatbot presents a couple of relevant rich cards.
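
In code, that step might look like the sketch below; lookup_ticket_prices() is a hypothetical stand-in for the real API integration, and the price shown is invented.

```python
def lookup_ticket_prices(attraction: str, participants: int) -> str:
    # Stand-in for the real ticketing API integration; the price is invented.
    return f"{participants} tickets for {attraction}: $32 per person."

def handle_ticket_inquiry(context: dict) -> str:
    required = ("attraction", "participants")
    missing = [slot for slot in required if slot not in context]
    if missing:
        return f"Could you tell me the {missing[0]}?"  # keep slot-filling
    return lookup_ticket_prices(context["attraction"], context["participants"])

session = {"attraction": "Cloud Forest", "participants": 2}
print(handle_ticket_inquiry(session))  # enough data: straight to the answer
```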

Suppose you made a mistake about the number of participants. You correct the error with a quick follow-up message.

Instead of triggering a fallback (“Sorry, I didn’t understand”), the message routes to a parameter-based intent. The chatbot already remembers your chosen attraction and only needs to account for the new participant count. It also knows you’re still in the ticket price inquiry state, so without requiring you to repeat anything, it tells you the new total price.
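
A sketch of that correction path, assuming a toy entity extractor and a hypothetical active_intent flag in the session context (neither is a specific platform’s mechanism):

```python
import re

def extract_participants(utterance: str):
    # Toy entity extraction: pull the first number out of the message.
    match = re.search(r"\d+", utterance)
    return int(match.group()) if match else None

def handle_turn(utterance: str, context: dict) -> str:
    participants = extract_participants(utterance)
    if participants and context.get("active_intent") == "ticket_price_inquiry":
        context["participants"] = participants  # update only the changed slot
        return f"No problem: {participants} tickets for {context['attraction']}."
    return "Sorry, I didn't understand."  # fallback only as a last resort

session = {"active_intent": "ticket_price_inquiry",
           "attraction": "Cloud Forest",
           "participants": 2}
print(handle_turn("Oops, make that 4 of us", session))
```

The key design choice is that the fallback fires only after the bot has tried to absorb the message into the active intent’s parameters.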

You go on to mention that you’re a local citizen.

Again, without making you repeat the attraction site or the number of people, and without changing the current conversation topic, the chatbot looks up ticket prices based on all the updated information gathered. Success!

Source: https://chatbotslife.com/why-good-chatbots-need-context-not-tree-based-flows-f083db0ed635
