
Chatbots and UX: Game Boy for Generation Z


Photo by Mike Meyers on Unsplash

Over time, the communicative relationship between chatbots and humans has become more authentic. Programmers and users alike hope that these interactions, currently confined mostly to customer service and simple messenger apps, will soon expand into far more exciting realms, both in terms of UX and in how chatbots can serve our needs in the world. Here we take a brief look at the operational crux of chatbots and the different ways they can work using the approaches currently available to computer scientists and bot developers.

Citizens of Ludditania

As with other forms of disruptive technology, chatbots are permeating our lives in ways we can barely imagine, and they look set to shape the technological path we are heading down over the next decade and far beyond. For some people, though (the non-tech-savvy citizens of Ludditania, one-time superfans of The Goonies and of the Game Boy), there are only questions with no answers.

So, for those among us who are eager for knowledge but short on it, and whose questions are simply 'what are they, how do they work and how do they learn?', here are some attempted answers.

What is a Chatbot?

The word chatbot is short for 'chat robot'. This form of artificial intelligence (AI) is used to conduct a conversation between the bot and its human interlocutor, either by voice or through written input. Chatbots come in many variations and forms. Some of the most popular examples are Apple's Siri, Amazon's Alexa and Google Assistant, with other types being used in website chat widgets and food-ordering apps. With them, the user can ask simple questions and receive answers.

ELIZA, the first chatbot

The history of chatbots, surprisingly, is quite storied. The trailblazer was ELIZA, created at MIT by Joseph Weizenbaum in 1966, though the German-American computer scientist made clear that ELIZA possessed no genuine intelligence. From that first development, chatbots have naturally got better and better.

Chatbots now serve many functions, from social media to websites. When you message many businesses on Facebook Messenger, for example, you are interacting with a chatbot.

One of the key features, and a main focus of current chatbot research, is making a chatbot sound as much like a human speaker as possible. At the moment many chatbots, even those using human voices, have a clunky sound to them, especially in the linking and free flow of words, which gives the game away that you are talking to a machine. In the future, it is hoped, this automated quality will be ironed out, giving human interlocutors the impression that the chatbot is, in fact, a human.

Now, won’t that be amazing?

There is still some way to go to make a chatbot’s voice sound as conversational and natural as a human’s, but we are getting there.

Slowly.

The Science Behind a Chatbot: What Makes Them Tick and How Do They Function?

How a chatbot operates is simple in its essence. A chatbot works through the combination of Natural Language Processing (NLP) and Machine Learning (ML). With this integration, a chatbot is able to pick up the nuances of human speech, taking into account such things as regional dialects, slang phrases and other oddities unique to human languages. At the same time, the NLP and ML components must isolate special words, or keywords, which the chatbot uses as the themes of its conversational dialogues and verbal interactions. With NLP and ML working together, a knowledge base is created from which the chatbot can retrieve stored responses matching the natural language spoken by its human interlocutor.

Source: digitaltrends.com

The theory of this sounds much more difficult than it is in practice. A quick example should suffice to clarify the process:

Imagine a person wants to talk to a chatbot at McDonald's or any other fast-food restaurant. They could type:

“I want a hamburger.”

A straightforward enough sentence in the Present Simple tense. We have a subject, ‘I’, a verb, ‘want’ and an object, ‘hamburger’. This constitutes the basic syntactical structure of a sentence in the English language.

From this statement, the chatbot (if programmed correctly) would recognize the verb 'want' as the signal of intent and the noun 'hamburger' as the topic of the whole sentence. From the knowledge base created by the NLP and ML, the chatbot could then deduce whether a 'hamburger' is, in fact, for sale and available to order in the restaurant.
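As a toy illustration of that lookup, here is a minimal Python sketch; the menu, the list of intent verbs and the responses are invented stand-ins for a real NLP pipeline and knowledge base, not how any production bot actually works:

# Toy intent/keyword extraction for "I want a hamburger."
# The menu, intent verbs and responses are invented for illustration only.

INTENT_VERBS = {"want", "order", "get", "buy"}               # words that signal an order
MENU = {"hamburger": 3.99, "fries": 1.99, "sprite": 1.49}    # the "knowledge base"

def tokenize(text: str) -> list[str]:
    """Lowercase and strip punctuation; real bots use proper NLP tokenizers."""
    return [w.strip(".,!?") for w in text.lower().split()]

def respond(text: str) -> str:
    tokens = tokenize(text)
    wants_something = any(t in INTENT_VERBS for t in tokens)  # detect the intent
    items = [t for t in tokens if t in MENU]                   # detect the keywords
    if wants_something and items:
        item = items[0]
        return f"One {item} coming up! That will be ${MENU[item]:.2f}."
    return "Sorry, I didn't catch what you would like to order."

print(respond("I want a hamburger."))   # -> One hamburger coming up! That will be $3.99.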

The retrieval from the knowledge base operates smoothly when there is only one order or request. Things become more complicated when the human user makes multiple requests at once. In this scenario, the chatbot has to decide which order or request is more important, either by reference to statistical data in its knowledge base or by offering both options simultaneously.

At the moment many chatbots still have trouble dealing with multiple requests joined by conjunctions such as 'and'. They handle these complex scenarios by responding to the first request in the sentence and very often ignoring the rest, so a request in the form 'I want to buy a hamburger but with fries and a Sprite' could be interpreted by the chatbot as 'I want a hamburger' only, as the conjunctions 'and' and 'but' add complexity the bot cannot resolve. The other requests would have to be asked as separate, simple questions for easy retrieval from the chatbot's knowledge base. One naive way to split such a request is sketched below.
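One simple, admittedly naive, way to cope with compound requests is to split the utterance on conjunctions and run each clause through the same keyword lookup. The conjunction list and menu here are illustrative assumptions, not a description of any particular production bot:

import re

# Naive compound-request handling: split on conjunctions, then run a keyword
# lookup on each clause. Real bots use a proper parser or intent classifier.

MENU = {"hamburger": 3.99, "fries": 1.99, "sprite": 1.49}
CONJUNCTIONS = r"\b(?:and|but|plus|with)\b"

def clauses(text: str) -> list[str]:
    """Break 'I want a hamburger but with fries and a Sprite' into clauses."""
    parts = re.split(CONJUNCTIONS, text.lower())
    return [p.strip(" .,!?") for p in parts if p.strip(" .,!?")]

def respond_compound(text: str) -> str:
    ordered = [word.strip(".,!?")
               for clause in clauses(text)
               for word in clause.split()
               if word.strip(".,!?") in MENU]
    if not ordered:
        return "Sorry, I didn't catch what you would like to order."
    total = sum(MENU[item] for item in ordered)
    return f"So that's {', '.join(ordered)}. Your total is ${total:.2f}."

print(respond_compound("I want to buy a hamburger but with fries and a Sprite"))
# -> So that's hamburger, fries, sprite. Your total is $7.47.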

What Kind of Chatbots Are There?

There are two main operational models of chatbot, generative and retrieval-based, and they operate in different ways while requiring distinct inputs and data sets.

The Generative Model:

The generative chatbot model does not rely on predefined responses in its knowledge base. Instead, it generates replies from the collated data of past conversations between itself and its interlocutors. This type of chatbot is good for casual conversation, but many of its responses can feel unnatural and out of step with the subject at hand. This incoherence can harm the natural flow of the exchange and lead to frustration, boredom and the eventual termination of the interaction between human and chatbot.

How the generative bot model works. Source: pavel.surmenok.com

Another negative aspect of this model is the frequency of syntactical and grammatical mistakes, which arise from the uneven mix of human interactions the chatbot has been exposed to and stored in its knowledge base. The huge amount of data required to keep a generative chatbot working well is another drawback; collecting and training on such data can be time-consuming and not altogether practical. A toy illustration of the generative idea follows below.
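As a very rough illustration of "generating replies from past conversation data", here is a toy word-level Markov-chain sketch trained on a few invented lines of dialogue. Real generative bots use neural sequence models, but the core idea of producing a reply rather than retrieving one is the same:

import random
from collections import defaultdict

# Toy "generative" bot: a word-level Markov chain built from past conversation
# text. The training lines are invented; real generative chatbots use neural
# sequence models, not this.

PAST_CONVERSATIONS = [
    "i want a hamburger and fries please",
    "can i get a hamburger with a sprite",
    "i would like fries and a sprite please",
]

def build_chain(lines: list[str]) -> dict[str, list[str]]:
    chain = defaultdict(list)
    for line in lines:
        words = line.split()
        for current, nxt in zip(words, words[1:]):
            chain[current].append(nxt)          # record which word followed which
    return chain

def generate(chain: dict[str, list[str]], start: str = "i", max_words: int = 10) -> str:
    word, output = start, [start]
    while word in chain and len(output) < max_words:
        word = random.choice(chain[word])       # pick a plausible next word
        output.append(word)
    return " ".join(output)

chain = build_chain(PAST_CONVERSATIONS)
print(generate(chain))   # e.g. "i want a hamburger with a sprite" (varies per run)

Even this toy shows the model's characteristic weakness: its output is only as coherent as the conversations it happened to see.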

The Retrieval-Based Model:

Working in a deeply contrasting way to the generative chatbot is the retrieval-based variation, which operates on the basis of graphs, statistical sets and directed flows. A ranking system is at the heart of its functionality: it is trained to rank and categorize the most suitable responses from a finite set of options entered into its knowledge base, retrieving them when required during human interaction.

How the retrieval bot model works. Source: dzone.com

This kind of chatbot is the one most commonly used in customer service and other UX applications. The data entered into the knowledge base can be tailored to react to specific questions with predictable responses, as the sketch below shows in miniature.
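Here is a minimal sketch of retrieval-style ranking, using scikit-learn's TF-IDF vectorizer and cosine similarity to pick the canned response whose trigger question best matches the user's input. The question/answer pairs and the confidence threshold are invented for illustration:

# A minimal retrieval-based ranker: score every known question against the
# user's input and return the answer paired with the best match.
# Requires scikit-learn; the Q/A pairs are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

KNOWLEDGE_BASE = [
    ("what time do you open", "We open every day at 9 am."),
    ("do you sell hamburgers", "Yes, hamburgers are on the menu for $3.99."),
    ("can i order delivery", "Delivery is available through our app."),
]

questions = [q for q, _ in KNOWLEDGE_BASE]
vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def retrieve(user_input: str) -> str:
    user_vector = vectorizer.transform([user_input])
    scores = cosine_similarity(user_vector, question_vectors)[0]  # rank all candidates
    best = scores.argmax()
    if scores[best] < 0.2:            # arbitrary confidence threshold
        return "Sorry, I don't have an answer for that yet."
    return KNOWLEDGE_BASE[best][1]

print(retrieve("are hamburgers sold here"))   # -> Yes, hamburgers are on the menu for $3.99.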

A Mixture of the Two:

Although both models have their advantages, as is always the case with technology, drawbacks exist too. To combat this, it is good to design a chatbot that possesses the benefits of both. This comes in the form of the retrieval-generative model. By applying the two approaches together, a chatbot can be created that is multifaceted and best serves the customer's needs. One common way to combine them is sketched below.
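One plausible way to combine the two (an assumption about the architecture, not a description of any particular product) is to try retrieval first and fall back to generation when no stored response scores well enough. The helper names and the threshold here are hypothetical:

# Hybrid retrieval-generative sketch: answer from the knowledge base when a
# stored response matches confidently, otherwise fall back to a generator.
# retrieve_scored() and generate_reply() stand in for the two models sketched
# earlier; the 0.2 threshold is an arbitrary illustrative choice.
from typing import Callable

def hybrid_reply(
    user_input: str,
    retrieve_scored: Callable[[str], tuple[str, float]],  # (best answer, confidence)
    generate_reply: Callable[[str], str],                  # free-form generator
    threshold: float = 0.2,
) -> str:
    answer, confidence = retrieve_scored(user_input)
    if confidence >= threshold:
        return answer                      # predictable, curated response
    return generate_reply(user_input)      # fall back to open-ended generation

# Example wiring with trivial stand-ins:
canned = {"do you sell hamburgers": "Yes, hamburgers are on the menu."}
retriever = lambda text: (canned.get(text.lower(), ""), 1.0 if text.lower() in canned else 0.0)
generator = lambda text: "Tell me more about what you'd like."
print(hybrid_reply("Do you sell hamburgers", retriever, generator))      # -> Yes, hamburgers are on the menu.
print(hybrid_reply("What's your favourite film?", retriever, generator)) # -> Tell me more about what you'd like.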

Constructing a chatbot that self-learns using the generative and retrieval models:

With the generative approach, a bot can be trained by humans themselves, who supply designated answers to specific questions not already in the chatbot's knowledge base. The downside of this method is that users with bad intentions can retrain the chatbot to respond to queries with hateful, even racist, slurs, or to spread agitprop and other unfortunate forms of propaganda, which does not show this model of chatbot in the best light.

The retrieval-based model, on the other hand, which as already discussed uses a ranking system to match responses to requests, requires the chatbot to be trained for specific areas of communication. Developers of such chatbots have to program the bot to cope with new intentions, or new 'intents', that are not already in the knowledge base, along the lines of the sketch below.
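A hedged sketch of what "adding a new intent" might look like in code: each intent is a label with example utterances, and classification is a simple word-overlap score. The intent names and phrases are invented, and production systems use trained classifiers rather than this toy matcher:

# Toy intent registry: developers add intents (label + example utterances),
# and incoming text is classified by word overlap with those examples.
# Intent names and phrases are invented; real systems use trained classifiers.

INTENTS: dict[str, list[str]] = {
    "order_food": ["i want a hamburger", "can i get fries", "i would like a sprite"],
    "opening_hours": ["what time do you open", "when are you open", "opening hours"],
}

def add_intent(name: str, examples: list[str]) -> None:
    """How a developer extends the bot with an intent it has never seen."""
    INTENTS.setdefault(name, []).extend(e.lower() for e in examples)

def classify(text: str) -> str:
    words = set(text.lower().split())
    def score(examples: list[str]) -> int:
        return max(len(words & set(e.split())) for e in examples)
    best = max(INTENTS, key=lambda name: score(INTENTS[name]))
    return best if score(INTENTS[best]) > 0 else "unknown"

add_intent("order_delivery", ["can you deliver", "do you do delivery"])
print(classify("can you deliver food to my street"))   # -> order_delivery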

This is only the bare bones of what a chatbot is and how it works. Hopefully, though, with developments in this area of AI moving forward at a steady pace, it won't be long before the communicative interaction between chatbots and their human interlocutors makes for a great UX.

Source: Artificial Intelligence on Medium
