If you consider creating a conversational agent, there are essentially two options:

  • a constructive approach where the agent builds up an answer based on rules, hard-coded switches and domain-specific input.
  • a learned approach where the agent is taught to talk (much like a child) based on a massive amount of data and repeated fine-tuning. This is in essence based on artificial neural networks and variations thereof.

One can combine both approaches, but this rarely seems to happen. Qwiery is such an example of a hybrid architecture, and one can tune how much of each approach is used to converse. Both approaches have pros and cons; below are some thoughts on each.

Constructive

Side-effects are easy to implement. Dispatching side-effects (e.g. recording things in a user’s agenda or task-list) is part of the general workflow. In contrast, attaching side-effects to a neural network is neither conceptually nor technically easy, and where and how to plug into the network is rarely addressed.
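As a rough sketch of that workflow, the snippet below couples a matched rule to a recorded side-effect. The `Agenda` store and the ‘remind me to …’ pattern are purely illustrative assumptions, not part of Qwiery.

```python
# Hypothetical sketch: side-effects as one step in a rule-based dispatch loop.
# The rule pattern and the Agenda store are illustrative, not Qwiery's API.
import re
from datetime import date

class Agenda:
    """A trivial in-memory task list standing in for a real persistence layer."""
    def __init__(self):
        self.tasks = []

    def add(self, description):
        self.tasks.append({"task": description, "added": date.today().isoformat()})
        return f"Noted: '{description}' is on your task list."

def handle(utterance, agenda):
    # A matched rule both produces an answer and triggers the side-effect.
    match = re.match(r"remind me to (.+)", utterance, re.IGNORECASE)
    if match:
        return agenda.add(match.group(1))
    return "I don't know how to help with that yet."

agenda = Agenda()
print(handle("Remind me to buy milk", agenda))
print(agenda.tasks)
```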

Extensibility is easy. Because a constructive approach really is only a matter of clean programming and good architecture, it is open to extensions. On the other hand, enabling a plugin architecture on top of (or next to) a neural network is in various ways a challenge.
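A minimal sketch of what such a plugin architecture can look like: each plugin declares what it can handle, and extending the agent amounts to appending another plugin. The class and method names are hypothetical, not Qwiery’s actual API.

```python
# Hypothetical plugin-style pipeline: interpreters are tried in turn until one
# claims the input. Names are illustrative only.
class TimePlugin:
    def can_handle(self, text):
        return "time" in text.lower()
    def respond(self, text):
        return "It is teatime somewhere, certainly."

class FallbackPlugin:
    def can_handle(self, text):
        return True
    def respond(self, text):
        return "I have no rule for that yet."

# Extending the agent amounts to appending another plugin to the pipeline.
pipeline = [TimePlugin(), FallbackPlugin()]

def answer(text):
    for plugin in pipeline:
        if plugin.can_handle(text):
            return plugin.respond(text)

print(answer("What time is it?"))
print(answer("Tell me a story."))
```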

Well-defined conversation or topic boundaries. It takes very little to, say, answer only questions of the general format ‘What is the time currently in $SomeLocation?’. Doing something similar with neural networks is another matter. Constructive agents are great for specific tasks and contexts.
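For instance, a minimal sketch of such a bounded rule; the regular expression and the hard-coded timezone offsets are illustrative assumptions, not a real implementation.

```python
# A bounded, pattern-based rule: it answers only questions of the form
# "What is the time currently in <location>?" and nothing else.
import re
from datetime import datetime, timezone, timedelta

# Offsets in hours, illustrative only (no DST handling).
TIMEZONES = {"london": 0, "new york": -5, "tokyo": 9}

def time_in(utterance):
    match = re.match(r"what is the time currently in (.+)\?", utterance.strip(), re.IGNORECASE)
    if not match:
        return None                      # outside the domain boundary: no answer
    place = match.group(1).lower()
    if place not in TIMEZONES:
        return f"I don't know the timezone of {match.group(1)}."
    now = datetime.now(timezone(timedelta(hours=TIMEZONES[place])))
    return f"It is {now.strftime('%H:%M')} in {match.group(1)}."

print(time_in("What is the time currently in Tokyo?"))
print(time_in("Tell me a joke."))        # -> None, the rule simply doesn't apply
```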

Learned

It requires a massive amount of data, special hardware and special software. While it’s easy to create a simple neural network and assemble some data, realistic results demand specialized hardware and software. The CleverBot story is a good example: hundreds of billions of sentences from Google News, Wikipedia and other sources. It takes solid GPU hardware to process this amount of data, and custom C code is needed to speed things up.

It requires experience to create a neural network. Creating good networks is as much a craft as it is a science. Picking a network type and tuning its many parameters is something you have to do for a long time in order to gain intuition for what ‘works’. Things like long short-term memory, convolutional networks and many other technical topics are not easy to understand or to apply.

It’s compact and simple (once you’ve made it). Once a network has learned the data and is set up, you end up with a relatively small amount of code and a magic little brain. A constructive approach requires more dedication and lots of code to cover all situations.

Language geometry is intrinsic. Words and sentences have to be encoded (as so-called embeddings) into a vector space (using e.g. word2vec), and this makes it easy to ask metrical questions or find word clusters (e.g. how similar are the words ‘number’ and ‘apple’ for a given corpus?). A typical constructive approach does not use embeddings but relies on the words themselves, and hence lacks this geometric articulation.
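As a back-of-the-envelope illustration of these metrical questions, the snippet below computes cosine similarity between word vectors. The vectors are made up for the example; in practice they would come from a model trained on a corpus (word2vec or similar).

```python
# Toy illustration of the geometry embeddings provide: cosine similarity
# between word vectors. The vectors here are invented; a real setup would
# take them from a trained embedding model.
import numpy as np

embeddings = {
    "number": np.array([0.9, 0.1, 0.3]),
    "digit":  np.array([0.8, 0.2, 0.4]),
    "apple":  np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["number"], embeddings["digit"]))   # close in this toy space
print(cosine(embeddings["number"], embeddings["apple"]))   # further apart
```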

Evolving and adaptive. Neural networks can be configured to adapt to the inflow of new data. Talking to a neural network changes the weights of the network. This is both a blessing and a curse, however: without a permanent form of supervision a learning bot can be turned into whatever it is fed (see below). A rule-based agent is, on the other hand, more rigid; it learns things in a more controlled fashion. Qwiery learns by filtering out data and storing things into its semantic network. This doesn’t mean it is free of attacks, but far less of the malicious input is actually absorbed.
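A hypothetical sketch of such controlled learning, where input is filtered before anything reaches the semantic network; the blocklist and the naive triple extraction are stand-ins, not Qwiery’s actual pipeline.

```python
# Hypothetical 'controlled' learning: new input is filtered before anything is
# written to the semantic network. The blocklist and the simple "X is a Y"
# extraction are illustrative only.
import re

BLOCKLIST = {"idiot", "hate"}           # crude noise/abuse filter

semantic_network = []                   # list of (subject, relation, object) triples

def learn(statement):
    words = set(re.findall(r"\w+", statement.lower()))
    if words & BLOCKLIST:
        return "That doesn't sound like something worth remembering."
    match = re.match(r"(\w+) is a (\w+)", statement, re.IGNORECASE)
    if match:
        semantic_network.append((match.group(1), "is_a", match.group(2)))
        return f"OK, I'll remember that {match.group(1)} is a {match.group(2)}."
    return "I didn't catch a fact in that."

print(learn("Rex is a dog"))
print(learn("You are a hateful idiot"))  # filtered, never stored
print(semantic_network)
```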

Hybrid

It’s difficult to imagine any type of (useful) agent without a complex mix of approaches. For one thing, there is spam and noise in the world one has to filter out or preprocess. Whether you have a domestic agent responding to children’s playful questions or a public messaging bot, people are noisy and sometimes downright evil. Conversational agents are very susceptible to attacks and manipulation. The story of Microsoft’s Tay bot is revealing in this respect.

Like anything else in life, a good business solution is a subtle and balanced mix of many things. A real-world application of agents is likely a recipe containing the following ingredients:

  • a memory which keeps track of a user’s input for the purpose of personalization and tuning of responses.
  • a learning mechanism which can handle unknown input and unknown situations. No agent will ever be complete unless you want it to cover only a predefined and bounded domain. Hence, it has to be programmed to handle undefined or unknown contexts.
  • a mechanism for self-protection (call it spam handling) and ultimately some form of ethical policy.
  • an extensible architecture to embed it into existing businesses, connect it to databases and whatnot.
  • a scalable (hardware and software) infrastructure
  • an open gateway to the world out there, able to tap into Wikipedia, search engines, weather data and so on
  • a mix of neural networks and programmed rules, as well as a logic for deciding which of the two approaches to dispatch (a minimal sketch follows below)

as well as various other things dictated by the business use cases at hand.
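As a minimal sketch of the dispatch logic mentioned in the last ingredient, one can let the rules take the first shot and fall back to a learned model for everything else; the `learned_model` function below is a placeholder assumption for any trained generative component.

```python
# Sketch of a hybrid dispatch: rules handle the bounded, predictable cases,
# a learned model covers whatever falls outside them.
def rule_based(utterance):
    rules = {"hello": "Hello there!", "bye": "Goodbye!"}
    return rules.get(utterance.strip().lower())            # None if no rule applies

def learned_model(utterance):
    # Placeholder: in a real system this would call a trained network.
    return f"(generated reply to: {utterance!r})"

def respond(utterance):
    answer = rule_based(utterance)                          # constructive path first
    return answer if answer is not None else learned_model(utterance)

print(respond("Hello"))
print(respond("What do you think about the weather?"))
```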

Qwiery is hybrid

Qwiery is effectively a hybrid framework which contains all of the aforementioned elements.

The table below summarizes features and different approaches. Of course, everything depends on precise definitions and needs. For instance, the ‘memory’ topic could mean ‘recursive neural network’ or ‘long short-term memory’ or simply a key-value store of properties. The table is in the first place an indication of Qwiery’s capabilities rather than a precise contrast between constructive and learned approaches.

| Feature | Constructive | Learned | Qwiery | Comments |
| --- | --- | --- | --- | --- |
| Memory | Easy | Moderate | Easy | Qwiery has a semantic network based on a proprietary graph database. |
| Extensibility | Moderate | Difficult | Easy | Qwiery was created with extensibility and ease of use in mind. |
| Neural networks | Not applicable | Easy | Difficult | Training and tuning a decent neural network is never easy. |
| Multi-bot | Moderate | Difficult | Easy | Qwiery is multi-user and multi-bot out of the box. |
| Learning curve | Easy | Difficult | Easy | Neural networks are hard science; constructive agents are more akin to traditional software development. |
| Deductive reasoning | Difficult | Difficult | Moderate | Deductive reasoning is a discipline on its own. Qwiery infers things based on the semantic network. |
| Randomness | Easy | Moderate | Easy | Qwiery draws at many levels on statistical distributions captured from its emotion engine, inferred user personality, acquired semantic knowledge, usage and so on. |
| Spam and noise | Difficult | Difficult | Difficult | Dealing with bad intentions and noise is always difficult in the context of machine learning. |
| Multilingual | Difficult | Easy | Moderate | |
| Domain boundary | Easy | Difficult | Easy | The ability to perform only within the context of a particular domain. |
| Emotion | Moderate | Difficult | Easy | Qwiery has a multi-user and multi-bot emotion engine. |