Most chatbot platforms take a relatively simplistic approach to conversational analysis. The conversational designer is asked to define the possible intents a user may have and to illustrate those intents through example phrases.
For example, a conversational designer could be interested in capturing the intent of a user to “purchase shoes”. This high-level intent is fed into the conversational tool and example phrases such as “I want to buy some shoes”, “I am looking for a pair of shoes”, “I want shoes” are provided to seed the NLP system.
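As a rough illustration of this seeding step, the sketch below stores an intent with its example phrases and matches user input by naive word overlap. The data structure and function names are hypothetical, not any particular platform's API:

```python
# Hypothetical representation of an intent seeded with example phrases.
intents = {
    "purchase_shoes": [
        "I want to buy some shoes",
        "I am looking for a pair of shoes",
        "I want shoes",
    ],
}

def match_intent(user_input, intents):
    """Naive matcher: pick the intent whose example phrases share
    the most words with the user input. Real platforms use trained
    NLP models rather than literal word overlap."""
    words = set(user_input.lower().split())
    best, best_score = None, 0
    for name, phrases in intents.items():
        score = max(len(words & set(p.lower().split())) for p in phrases)
        if score > best_score:
            best, best_score = name, score
    return best

print(match_intent("I want to buy shoes", intents))  # purchase_shoes
```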
The NLP system will then attempt to expand on the example phrases when faced with user input. In the best of cases (but not always) it would even be able to match the phrase “I want to purchase boots” to the intent “purchase shoes”. It can do this because the NLP tool knows that the semantic distance between boots, shoes or even slippers is small - they all refer to the same category of things.
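Semantic distance is typically measured with word embeddings: words from the same category sit close together in vector space. A minimal sketch using cosine similarity over invented toy vectors (real systems use learned embeddings such as word2vec, GloVe, or transformer encoders):

```python
import math

# Toy vectors invented for illustration only; a real NLP tool
# would load learned embeddings.
vectors = {
    "shoes":    [0.9, 0.8, 0.1],
    "boots":    [0.85, 0.75, 0.2],
    "slippers": [0.8, 0.7, 0.15],
    "invoice":  [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, near 0
    means semantically unrelated (for these toy vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# "boots" lands close to "shoes", so the system can map
# "purchase boots" onto the "purchase shoes" intent.
print(cosine(vectors["shoes"], vectors["boots"]))    # high
print(cosine(vectors["shoes"], vectors["invoice"]))  # low
```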
This level of intent matching is great for a lot of common chatbot interactions but it cannot deal with more complicated or novel domains.
Combining data-driven AI with model-driven AI
In order to solve this problem, we combine the brute-force strength of data-driven AI with model-driven AI. In particular, in addition to the intent matching described above, we also access the lower-level analysis results (syntax analysis, entities, key phrases) and combine them with a knowledge graph of our domain. This knowledge graph (or ontology) connects intents, phrases, nouns and adjectives to concepts in our domain.
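One simple way to picture such a knowledge graph is as labelled edges from surface words (nouns, adjectives, key phrases) to domain concepts, and from concepts to other concepts. The structure and relation names below are illustrative assumptions, not a specific ontology format:

```python
# Hypothetical knowledge graph: each node maps to its outgoing
# labelled edges. Words link to concepts; concepts link onward.
graph = {
    "boots":    {"is_a": "footwear"},
    "shoes":    {"is_a": "footwear"},
    "leather":  {"attribute_of": "footwear"},
    "footwear": {"sold_in": "shoe_department"},
}

def concepts_for(word):
    """Walk outgoing edges from a word, collecting the chain of
    (relation, concept) pairs it connects to."""
    found = []
    edges = graph.get(word, {})
    while edges:
        relation, target = next(iter(edges.items()))
        found.append((relation, target))
        edges = graph.get(target, {})
    return found

print(concepts_for("boots"))
# [('is_a', 'footwear'), ('sold_in', 'shoe_department')]
```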
We analyse the input from the user and use the results to navigate the knowledge graph, finding the best guess of what the user was trying to say at a semantic level. We then use that information to determine the most appropriate next question to put to the user. If we are able to confidently position the user input on the knowledge graph, we continue the conversation; if we are missing some context, we reply with a much more focussed follow-on question in order to reduce ambiguity.
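The decision step above can be sketched as: if the extracted entities position the user on exactly one concept in the graph, continue the conversation; otherwise ask a focussed follow-on question. All names and wording here are illustrative assumptions:

```python
# Hypothetical word-to-concept lookup derived from the knowledge graph.
WORD_TO_CONCEPT = {
    "boots": "footwear",
    "shoes": "footwear",
    "jacket": "outerwear",
}

def next_step(entities):
    """Decide whether to continue or to ask a disambiguating
    follow-on question, based on extracted entities."""
    concepts = {WORD_TO_CONCEPT[e] for e in entities if e in WORD_TO_CONCEPT}
    if len(concepts) == 1:
        # Confidently positioned on the graph: carry on.
        return ("continue", concepts.pop())
    if not concepts:
        return ("ask", "Could you tell me what kind of product you are after?")
    # Ambiguous: narrow down with a focussed follow-on question.
    return ("ask", "Are you shopping for %s?" % " or ".join(sorted(concepts)))

print(next_step(["boots"]))            # ('continue', 'footwear')
print(next_step(["boots", "jacket"]))  # ambiguous -> follow-on question
```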