Chatbot Performance Analytics
Building a chatbot is only the first step in making it useful. Use NativeChat’s built-in analytics dashboard to monitor your bot’s performance, look for areas to optimize, and fix broken conversation flows.
To check your bot’s performance, go to the Analytics tab. Once there, you can adjust the period for which you want to see data and the aggregation type (weekly or daily). The default period is the last 6 weeks and the default aggregation is weekly. The data does not include the current (unfinished) week; only the complete weeks within your selected period are shown.
Engagement Metrics
Learn how many people are using your bot and how successful they are.
All vs. successful conversations
The bars show the weekly/daily number of conversations started vs. completed. This gives you an idea of how many people use your bot and how many of them are able to complete their tasks. If the numbers are lower than expected, consider a better way to lead people to interact with your bot.
The black line is the trend of the ratio between the two numbers. Ideally, it should be as high as possible because we would like everybody who starts a conversation with the bot to finish it successfully. When you make changes to the bot, make sure you come back and check this trend to see if they have a positive (or negative) impact.
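To make the trend-line arithmetic concrete, here is a minimal Python sketch of how a completion ratio per week could be computed. The conversation records below are hypothetical sample data; NativeChat computes this for you on the dashboard.

```python
from collections import Counter

# Hypothetical per-conversation records: (week start, completed?) pairs.
conversations = [
    ("2020-09-28", True),
    ("2020-09-28", False),
    ("2020-09-28", True),
    ("2020-10-05", False),
    ("2020-10-05", True),
]

started = Counter(week for week, _ in conversations)
completed = Counter(week for week, done in conversations if done)

# Completion ratio per week: the quantity the black trend line tracks.
ratio = {week: completed[week] / started[week] for week in started}
```

A rising ratio after a bot change suggests the change helped users finish their tasks; a falling one suggests it hurt.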
Success Ratio
This chart shows the relative proportion of the successful conversations to the abandoned ones and the ones handed off to a human operator.
The orange operator area on the chart appears when a conversation includes a stay-silent command. This command is usually used when you hand the conversation off to a live person to respond to user messages. The chart shows in how many cases the bot was not able to handle the task on its own and the user had to ask to contact a human instead.
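The three proportions on the chart follow from simple counting. This sketch uses hypothetical outcome labels (the names "success", "abandoned", and "operator" are illustrative, not NativeChat identifiers):

```python
# Hypothetical per-conversation outcome labels; "operator" marks sessions
# where a stay-silent command handed the chat to a human.
outcomes = ["success", "abandoned", "operator", "success", "success"]

total = len(outcomes)
counts = {label: outcomes.count(label)
          for label in ("success", "abandoned", "operator")}

# Relative proportions: what the Success Ratio chart visualizes.
proportions = {label: n / total for label, n in counts.items()}
```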
Success Count
This chart shows the same data as the Success Ratio chart, but with absolute numbers instead of percentage proportions.
Retention
Shows how many of the users continue to use the bot week after week (or day after day).
If your bot handles repetitive rather than one-time tasks (like ordering lunch vs. filing papers for retirement), you can use this chart to see how many of your users return to it over time.
In the example above, 261 users talked with the bot during the week starting Sept 28. In the following week (the week of Oct 5, in the first column), only 2% of those people talked to the bot again; in the third week, 1%; and so on.
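Retention of this kind is a cohort calculation: take the users active in the starting week, then check what fraction of them reappears in each later week. A small sketch with made-up user IDs and week keys:

```python
# Hypothetical sets of user IDs active in each week (keyed by week start).
active = {
    "2020-09-28": {"u1", "u2", "u3", "u4"},
    "2020-10-05": {"u2", "u3", "u9"},
    "2020-10-12": {"u4"},
}

# The cohort is everyone who talked to the bot in the starting week.
cohort = active["2020-09-28"]

# Fraction of the cohort that returned in each following week.
retention = {
    week: len(users & cohort) / len(cohort)
    for week, users in active.items()
    if week != "2020-09-28"
}
```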
Understanding
The table shows the names of all entities you have defined in the cognitive flow as rows. For each entity there is information about:
- the number of abandoned sessions — in how many conversations the users stopped responding after the bot asked about that entity. In the example, 282 conversations were abandoned when the bot asked about time. This could mean one of two things: a) the bot did not understand the time provided by the user, or b) the bot understood the time correctly, but that specific time slot was not available for booking an appointment with a doctor.
- the understanding ratio — how many times the bot was able to understand the user’s answer as the proper entity type. In the example, the bot correctly extracted a time value in 632 out of the 685 conversations that had a step with the time entity. That makes for a 92% understanding ratio, which is good. To improve it further, check some actual conversation logs on the History screen in the NativeChat web portal. This should give you an idea of why the bot failed in the remaining 8% of cases. People may have phrased the time in an unexpected format, or they may not have wanted to respond with a time at all and asked something different instead.
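To make the arithmetic concrete, this snippet reproduces the ratio from the example above (632 understood out of 685 total):

```python
# Numbers from the example: the bot extracted a valid time value in
# 632 of the 685 conversations that asked for the "time" entity.
understood, total = 632, 685
understanding_ratio = understood / total

print(f"{understanding_ratio:.0%}")  # prints "92%"
```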
Use this information to improve your bot’s training data so that it can recognize user input in more cases, or adjust your conversation flow if the bot’s questions are not what users expect. It could be that users don’t want to provide the answer the bot asks for because the question doesn’t make sense to them.
Interaction Paths
The chart shows the relative sizes of interaction paths (the steps through which a conversation between the bot and the user goes).
In the example above, many conversations start with the date question, and the path then continues with the time step, the doctor step, and so on.
Ideally, users would go through each step only once and successfully reach the end of the conversation. The lengthier a path on the chart, the more steps and repetitions there were in the conversation.
There are two more interaction path charts: one with only the successful conversations and one with only the failed (abandoned) ones. They can give you an idea of how the paths for these two types of conversations differ, so that you can fix the failing ones.
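If you export conversation logs yourself, a path is simply the ordered sequence of steps in a conversation, and its chart size is how often that exact sequence occurs. A sketch with hypothetical step names:

```python
from collections import Counter

# Hypothetical step sequences: the entities the bot asked about in each
# conversation, in order.
paths = [
    ("date", "time", "doctor"),
    ("date", "time", "time", "doctor"),  # "time" repeated: the bot re-asked
    ("date", "time", "doctor"),
]

path_counts = Counter(paths)   # relative size of each path on the chart
longest = max(paths, key=len)  # longer paths mean more repeated steps
```

Paths with repeated steps (like the second one above) point at questions the bot had to ask more than once, which is where users are most likely to abandon the conversation.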
Something missing or not clear?
Ask a question in our community forums or submit a support ticket.