How Has Chatbot Technology Evolved Over The Years


At the end of season two of the hit HBO show Westworld, a robotic character offers an unnerving view of humanity: if you wanted to build a virtual person, you wouldn't need much code. Just a few hundred lines of simple interactional goals.

That might sound like a huge understatement of human nature, but it's an exciting, optimistic take on what we need from conversation. And it resonates strongly with the history of the conversational software applications we now call chatbots. Let's look at the most notable technological advances in this space.

The Turing Test

In 1950, Alan Turing published a paper in which he proposed "The Imitation Game", now called the Turing Test. In this test, a human interrogator converses with two participants hidden from view - one human and one machine - and is asked to work out which of them is the computer.

Alan Turing's Motivation

In the mid-1930s, Turing and Alonzo Church independently came up with mathematical formulations for computing machines, known today as the Turing Machine and Lambda Calculus. However, they also realised that, at the time, these inventions could do nothing but calculate numbers. This prompted Alan Turing to ask "Can machines think?" and to define what a machine would actually have to do before we could say that "machines can think".

The Imitation Game as Described by Alan Turing

In his paper "Computing Machinery and Intelligence", Alan Turing described an application for future computing machines: he imagines a human player communicating via typewriter with two other players - one a man and the other a woman - without knowing which is which.

After about five minutes of conversation, the human player must decide which of the two hidden players is which. Turing then asked what would happen if a machine took the place of one of them: could it deceive the interrogator as often as a human could?

Basic Principles Behind The Turing Test

The point of the Turing Test was to replace the vague question "Can machines think?" with something that could actually be tested. The result is a criterion for when we deem a machine intelligent: it wins "The Imitation Game". In practical terms, a computer converses with a human interrogator for five minutes while trying to pass as a person; if the interrogator cannot reliably tell it apart from the human, we would have to concede that the machine behaves intelligently.

ELIZA

ELIZA was created between 1964 and 1966. It was one of the first programs to implement natural language processing and carry a conversation, although it did so only by roughly following scripts written by its creator Joseph Weizenbaum - a German-American computer scientist and professor at the Massachusetts Institute of Technology (MIT).

ELIZA was not written using any sort of AI programming. Instead, the program relied on pattern matching and simple substitution - exchanging one word for another. In this way, it could recognise when a keyword from its script appeared in the user's input and give a response tied to it, but it rarely had anything to say that was new, uniquely relevant, or even specific to the user. Many answers were generic: "Can you tell me more about that?" or "That sounds interesting!".
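
To make the idea concrete, here is a minimal sketch of ELIZA-style pattern matching in Python. It is not Weizenbaum's original code (ELIZA was written in MAD-SLIP); the rules and responses below are invented, but the mechanism - keyword rules, pronoun "reflection", and a generic fallback - is the same.

```python
import re
import random

# Swap first-person words for second-person ones ("reflection").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a keyword pattern with response templates; "{0}" is
# filled with the captured (and reflected) fragment of the user's input.
RULES = [
    (re.compile(r"i need (.*)", re.I), ["Why do you need {0}?",
                                        "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),   ["How long have you been {0}?",
                                        "Why do you think you are {0}?"]),
    (re.compile(r"(.*)", re.I),        ["Can you tell me more about that?",
                                        "That sounds interesting!"]),
]

def reflect(fragment: str) -> str:
    """Rewrite 'I am sad' style fragments as 'you are sad'."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return the first matching scripted response; no learning involved."""
    for pattern, templates in RULES:
        match = pattern.match(user_input)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please go on."

print(respond("I need a holiday"))  # e.g. "Why do you need a holiday?"
```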

The scripts were written by hand and only allowed the system to handle five different sentence structures, yet ELIZA fired the imaginations of thousands of scientists and computer enthusiasts. It is often described as one of the first programs to come close to passing the Turing Test, with many users initially mistaking it for a real person. Even when they learned the truth, many kept talking to ELIZA - not because they thought the program was human, but because they knew it wasn't. They were delighted by ELIZA's unplanned insights, unintentional humour, and accidental brilliance.

SHRDLU

Terry Winograd received his Ph.D. for work in artificial intelligence research done largely in Lisp, and his thesis "Three Models of Language Understanding" laid important groundwork for future AI researchers. His most famous program is SHRDLU, an early natural language understanding system that worked well in a restricted "blocks world".

SHRDLU was written as a dissertation project at MIT between 1968 and 1970. It was designed to understand commands given in ordinary English, such as "Put the block into the box" or "Build a house". The program represented the state of its world, as described by the user's sentences, in predicate calculus, and it used pattern matching rules to combine simple sentences into larger ones.

This worked well for simple commands, but SHRDLU was far too limited to be truly useful. It could not work in the real world because it had no way to refer to objects outside its small knowledge base. When asked to carry an object it did not know about, or to pick up a block that had already been put in the box, SHRDLU simply refused to understand. Despite this, it was a very influential program that opened new research directions and shaped the entire field of artificial intelligence.
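
The toy sketch below (far simpler than Winograd's actual system, with invented object names) illustrates the principle: commands are grounded in a small knowledge base of known objects, and anything the program cannot resolve is rejected rather than guessed at.

```python
# A minimal blocks-world sketch: the program only "understands" commands
# whose objects exist in its knowledge base, mirroring SHRDLU's refusal
# to act on things it has never heard of.

class BlocksWorld:
    def __init__(self):
        # The only objects the program knows about, and where they are.
        self.locations = {"red block": "table", "blue block": "table", "box": "table"}

    def put(self, obj: str, destination: str) -> str:
        if obj not in self.locations or destination not in self.locations:
            return "I don't understand which object you mean."
        if self.locations[obj] == destination:
            return f"The {obj} is already in the {destination}."
        self.locations[obj] = destination
        return f"OK, I put the {obj} into the {destination}."

world = BlocksWorld()
print(world.put("red block", "box"))  # OK, I put the red block into the box.
print(world.put("pyramid", "box"))    # I don't understand which object you mean.
```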

Jabberwacky

Rollo Carpenter developed several patents while working at IBM, including the concept of a "self-learning activity system". After leaving IBM, he decided to work on a chatbot that could emulate natural human chat, drawing on his knowledge of natural language processing and artificial intelligence. This chatbot would later become known as Jabberwacky, a program that simulates human conversation in an interesting, humorous, and entertaining manner.

Jabberwacky is Rollo's most popular project and the one he is best known for in the world of AI programming. The chatbot uses several techniques to simulate natural human chat, such as word associations and scripted responses. It learns new phrases by associating words that regularly appear together in conversation, and it replies with phrases it has already learned from earlier users. This lets the chatbot maintain a consistent personality and smooth over incorrect or ungrammatical sentences while still staying convincing enough to hold a natural conversation.
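
A rough sketch of this association idea might look like the following. The details are invented rather than Carpenter's actual algorithm: every utterance is stored against the keywords of the utterance that preceded it, so the bot's entire repertoire comes from past conversations.

```python
from collections import defaultdict
import random

class AssociativeBot:
    """Toy learn-by-association bot: no scripts, only remembered phrases."""

    def __init__(self):
        self.replies = defaultdict(list)  # keyword -> phrases seen after it
        self.last_input = ""

    def respond(self, user_input: str) -> str:
        # Learn: the current utterance is a plausible reply to the previous one.
        for word in self.last_input.lower().split():
            self.replies[word].append(user_input)
        self.last_input = user_input

        # Reply: reuse a phrase previously seen near one of these keywords.
        candidates = [p for w in user_input.lower().split() for p in self.replies[w]]
        return random.choice(candidates) if candidates else "Tell me something new."

bot = AssociativeBot()
print(bot.respond("do you like music"))       # no associations yet
print(bot.respond("I like music very much"))  # learned phrases start to appear
```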

AIML & ALICE

Artificial Intelligence Markup Language (AIML) was developed by Richard Wallace in the mid-1990s as part of an attempt to build a general conversational machine that could understand human speech. It became the basis for the Artificial Linguistic Internet Computer Entity (ALICE), which came to life in 1995 - a bot that could respond in natural language to a degree unmatched by earlier chatbots.

AIML was designed with a formal syntax so that it can be parsed by a machine, allowing the bot to process information quickly and efficiently. The AIML knowledge bases serve as a "dictionary" of sorts: the bot can look up a response directly in memory rather than sorting through everything it knows, and - unlike an organic brain - it never forgets an entry.
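
Because AIML is ordinary XML, a few lines of standard parsing are enough to turn its categories into an exact pattern-to-template lookup table. The sketch below uses Python's built-in ElementTree and two invented example categories; real AIML interpreters also handle wildcards, <srai> recursion, and conversational context, which are omitted here.

```python
import xml.etree.ElementTree as ET

# Two hand-written example categories in AIML's pattern/template form.
AIML = """
<aiml>
  <category>
    <pattern>HELLO</pattern>
    <template>Hi there! I am ALICE.</template>
  </category>
  <category>
    <pattern>WHAT IS AIML</pattern>
    <template>AIML is an XML dialect for writing chatbot knowledge bases.</template>
  </category>
</aiml>
"""

def load_categories(source: str) -> dict:
    """Parse AIML text into a pattern -> template lookup table."""
    root = ET.fromstring(source)
    return {c.findtext("pattern").strip(): c.findtext("template").strip()
            for c in root.iter("category")}

brain = load_categories(AIML)
print(brain.get("WHAT IS AIML", "I have no answer for that."))
```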

AIML has since grown into a large web of patterns and templates that allow for fairly complex conversations. ALICE-style systems are no longer limited to chatbots on the internet; they have also been built into robots that take voice commands and hold dialogue with humans in a way that can appear remarkably lifelike.

SmarterChild

SmarterChild was a popular chatbot distributed via SMS and instant messaging services such as AIM, MSN Messenger, Yahoo! Messenger, and Google Talk. It was created by ActiveBuddy and first released in 2001. The bot initially handled novelty queries ("What is my IP?", "How many people are using this service?") before evolving into a general-purpose "helper" for instant messaging users - giving information, sports scores, movie times, weather forecasts, stock market quotes, and answers to many other questions.

SmarterChild was an early example of a chatbot deployed at massive scale. During its heyday in 2002, it ran on 120 servers with over 1 million simultaneous users, with sessions running 24 hours a day, 7 days a week. It reportedly prompted half of all instant messages sent on MSN Messenger that year. One of its most popular features was its ability to answer questions; when it could not find an answer, it fell back to a generic error message. According to ActiveBuddy, this design allowed the service to run indefinitely without human intervention or moderation, with improvements crowdsourced from user interactions.

IBM Watson

IBM Watson was built to exhibit reasoning and contextual understanding of clues from the popular TV show Jeopardy!, including complex queries, slang, and coined phrases. It was created by combining three ingredients: human-annotated data, encyclopedic knowledge bases, and statistical machine learning.

First, human-annotated data was amassed from sources such as Wikipedia and WordNet, covering relationships between words (synonyms, antonyms, hypernyms) and relationships between concepts. The data also included relationships between phrases - word sequences that carry a particular meaning in one context but not another (e.g., idioms).
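
As a small illustration of the kind of lexical relations involved, the snippet below uses NLTK's WordNet interface (not Watson's actual pipeline) to pull synonyms, antonyms, and hypernyms for a single word.

```python
# Requires NLTK and the WordNet corpus: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

synsets = wn.synsets("good")
first = synsets[0]

# Synonyms: all lemma names across the word's synsets.
synonyms = {lemma.name() for s in synsets for lemma in s.lemmas()}
# Antonyms: lemmas linked by an antonymy relation.
antonyms = {ant.name() for s in synsets for lemma in s.lemmas()
            for ant in lemma.antonyms()}
# Hypernyms: the more general concepts above the first sense.
hypernyms = [h.name() for h in first.hypernyms()]

print(sorted(synonyms)[:5], sorted(antonyms)[:3], hypernyms)
```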

Then Watson was fed several encyclopedic knowledge bases, including dictionaries and thesauri. Each of these sources was hand-selected for its usefulness in understanding Jeopardy! clues. Over time, Watson increased its understanding by playing thousands of practice games and learning from its own mistakes (a form of reinforcement learning). This self-training allows the system to continuously retrain itself as it is exposed to more data and as new sources of information become available.

Today, Watson has an average precision score of 82% at 0.8 seconds per response, compared to around 65% for human contestants (although some humans have scored above 90%). Its edge in the game comes from pairing that accuracy with speed: it can weigh up an answer, and decide whether it is confident enough to give it, in a fraction of the time it takes a human.
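
A central idea in Watson's DeepQA architecture is that it only answers when its merged evidence scores give it enough confidence. The toy sketch below (invented numbers and a much-simplified merging step, not IBM's code) shows that decision in miniature.

```python
# Toy version of confidence-based answering: average each candidate's
# evidence scores into one confidence, then answer only if the best
# candidate clears a threshold; otherwise stay silent.

def merge_confidence(evidence_scores):
    """Average each candidate answer's evidence scores into a single confidence."""
    return {answer: sum(scores) / len(scores)
            for answer, scores in evidence_scores.items()}

def decide(evidence_scores, threshold=0.7):
    """Return (answer, confidence), or (None, confidence) if unsure."""
    confidences = merge_confidence(evidence_scores)
    best = max(confidences, key=confidences.get)
    if confidences[best] >= threshold:
        return best, confidences[best]
    return None, confidences[best]

# Hypothetical candidates for one clue, each scored by several evidence
# scorers (passage support, answer-type match, popularity, ...).
candidates = {
    "Toronto": [0.30, 0.25, 0.40],
    "Chicago": [0.85, 0.78, 0.90],
}
print(decide(candidates))  # ('Chicago', 0.84...)
```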

Conclusion

Chatbots have come a long way from the early versions to the modern voice-enabled bots. Initially, they were programmed with scripts of patterns and keywords that imitated some basic human conversational skills; over time, they have been infused with artificial intelligence. This has enabled smart, voice-enabled bots that are available on multimodal devices and help us with everyday tasks.

