{ "metadata": { "name": "", "signature": "sha256:7ca00f3963a4d77fb5a362f29f86b9fb4cf3171764c8498ffedc0b2d536bd216" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Today's session shows how you can build classifiers to respond to the gist or purpose of an utterance. Our techniques illustrate what's called [supervised machine learning][1]: finding generalizations from labeled examples that we can use to respond corretly to new, unlabeled examples. \n", "\n", "This session draws heavily on the [NLTK toolkit][2]. It has a bunch of convenient functions for doing supervised machine learning, which is helpful. But there are lots of other good Python utilities for doing machine learning, especially [scikit-learn][3], which gives access to a really wide variety of methods. NLTK's learning is more of a greatest hits compilation by comparison, but we're only going to use the simplest possible learning method: [naive Bayes classifiers][4]. What NLTK also has is a bunch of bundled training data: collections of language that have been marked up by hand to indicate the answers to important questions. In order to do machine learning, we need that kind of training data.\n", "\n", "We're going to focus on two problems that are particularly relevant for a chatbot. \n", "- The first is [sentiment analysis][5], which is understanding whether a comment conveys positive or negative information. The idea is that we'll be able to classify the user's input, and be able to respond appropriately with markers of feedback like _Cool!_ or _Too bad..._ Obviously you don't want to use these the wrong way!\n", "- The second is *dialogue act tagging*, a problem first explored in [this journal paper][6], which is understand the kind of contribution that somebody is making to a conversation with an utterance. It's not always easy to tell whether a statement is supposed to give new information or comment on something that's been said before, or whether a question is asking for clarification or proposing a new topic for conversation. However, there are some cues and patterns in text that we can learn from data, so that our chatbot can respond differently to these different moves.\n", "\n", "We start with the usual invocations of NLTK.\n", "\n", "[1]:http://en.wikipedia.org/wiki/Supervised_learning\n", "[2]:http://www.nltk.org/\n", "[3]:http://scikit-learn.org/stable/\n", "[4]:http://en.wikipedia.org/wiki/Naive_Bayes_classifier\n", "[5]:http://en.wikipedia.org/wiki/Sentiment_analysis\n", "[6]:http://www.aclweb.org/anthology/J/J00/J00-3003.pdf\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import nltk, re\n", "from nltk.corpus import movie_reviews" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 1 }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll start with sentiment analysis. This code has a few sources. The first is [Chapter 6 of the NLTK Book][1]. This gives a good overview of supervised learning and the interfaces that NLTK offers for dealing with the data. The second is [a series of blog posts by Jacob Perkins][2]. (I've given you the link to the first post in the series, but they're all linked together.) Perkins is a much better source about how to represent documents, select features and evaluate the results. 
I've also consulted [Chris Potts's tutorial about sentiment analysis][3], which is a nice mix of the academic and the practical.\n", "\n", "[1]:http://www.nltk.org/book/ch06.html\n", "[2]:http://streamhacker.com/2010/05/10/text-classification-sentiment-analysis-naive-bayes-classifier/\n", "[3]:http://sentiment.christopherpotts.net/index.html\n", "\n", "My goal here, by the way, is to show you some techniques that are known to have good results and illustrate the process of using machine learning to make decisions in a program. This is not the whole picture, and there's a lot more to say about how you go about using machine learning carefully and creatively for a new problem. Some of the shortcuts that I'm using here (like training on all the available data, without saving anything for development and testing) could lead to very bad results if we had to tinker to put the program together rather than just reusing techniques that other people have already worked out.\n", "\n", "For this classification problem, we represent a text snippet as a collection of features. The features in a document list the _informative_ words that _occur_ in the document. There are a couple of choices here that are not obvious but are important.\n", "- We use _informative_ words so that the learning algorithm and the final classifier do not get confused by irrelevant information. Noisy features can make it harder for the learner to lock on to what really matters for a decision; it requires more search and more data to be able to find the real correlations in the presence of many unreliable features, so we'll just get rid of the stuff that's not likely to be useful. We'll measure informativeness statistically, using a $\\chi^2$ statistic derived from the prevalence of feature counts among positive or negative examples. The $\\chi^2$ statistic says how unlikely your observed feature counts are to have arisen by chance. The more unlikely they are, the higher the $\\chi^2$ statistic, and the better evidence the feature gives you about the true category of the data. (There's a small worked example of this computation right after this list.)\n", "- We only have features for the words that _occur_ in the document. The NLTK book suggests that you have features for the words that occur in the document _and_ for the words that do not occur in the document, but this does not work very well in our setting. Normally, in a supervised classification problem, the features are the most meaningful measurements that you can get automatically from your data to make a decision. If you're diagnosing a disease, for example, you might decide to do a particular blood test and then record whether the outcome of the test is positive or negative. Observing a word in a review isn't really like this. You'd like to know whether the reviewer thought the movie was, say, _interesting_ (which is probably a good thing) but that's not the same thing as whether the reviewer actually used the word _interesting_ in the review. The gap is particularly important in the short utterances you get in a chatbot - most words aren't going to be used simply because the utterances are short, so you don't really get any evidence that the user didn't think something just because they don't use that word. Naive Bayes models can handle this easily because the probability models apply when you observe any subset of features -- features that you haven't observed can just be ignored in your learning and decision making.\n"
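, "\n", "To make the $\\chi^2$ scoring concrete, here's the computation that `compute_best_features` (defined below) performs for a single feature. The counts here are made up for illustration, but `nltk.BigramAssocMeasures.chi_sq` is the same scoring function the real code uses." ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Made-up counts: suppose 'outstanding' occurs 100 times in positive reviews\n", "# and 10 times in negative ones, out of 1,000,000 feature occurrences total,\n", "# split evenly between positive and negative reviews.\n", "pos_score = nltk.BigramAssocMeasures.chi_sq(100, (110, 500000), 1000000)\n", "neg_score = nltk.BigramAssocMeasures.chi_sq(10, (110, 500000), 1000000)\n", "# compute_best_features sums the per-label scores to rank the feature.\n", "print pos_score + neg_score" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [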
"I have packaged up the reasoning in a function called `compute_best_features`, which we'll use both to build our sentiment analyzer and our dialogue act tagger." ] }, { "cell_type": "code", "collapsed": false, "input": [ "def compute_best_features(labels, feature_generator, n) :\n", "    # Count how often each feature occurs overall and within each label.\n", "    feature_fd = nltk.FreqDist()\n", "    label_feature_fd = nltk.ConditionalFreqDist()\n", "    \n", "    for label in labels:\n", "        for feature in feature_generator(label) :\n", "            feature_fd[feature] += 1\n", "            label_feature_fd[label][feature] += 1\n", "    \n", "    # Total feature occurrences per label, and the grand total.\n", "    counts = dict()\n", "    for label in labels:\n", "        counts[label] = label_feature_fd[label].N()\n", "    total_count = sum(counts[label] for label in labels)\n", "    \n", "    # Score each feature by summing its chi-squared association with each label.\n", "    feature_scores = {}\n", "    \n", "    for feature, freq in feature_fd.iteritems():\n", "        feature_scores[feature] = 0.\n", "        for label in labels :\n", "            feature_scores[feature] += \\\n", "                nltk.BigramAssocMeasures.chi_sq(label_feature_fd[label][feature],\n", "                                                (freq, counts[label]), \n", "                                                total_count)\n", "    \n", "    # Keep the n highest-scoring features.\n", "    best = sorted(feature_scores.iteritems(), key=lambda (f,s): s, reverse=True)[:n]\n", "    return set([f for f, s in best])" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 2 }, { "cell_type": "markdown", "metadata": {}, "source": [ "This block of code computes the features and defines a function to extract the features corresponding to a list of words. You won't want to execute just part of this block - it's a coherent unit of code - so it's commented inline. It takes a little while to run because it's going through the whole corpus, but it's not so slow that it's worth pickling `best_sentiment_words` and loading it in later." ] }, { "cell_type": "code", "collapsed": false, "input": [ "stop_word_file = \"stop-word-list.txt\"\n", "with open(stop_word_file) as f :\n", "    stop_words = set(line.strip() for line in f)\n", "\n", "def candidate_feature_word(w) :\n", "    return w not in stop_words and re.match(r\"^[a-z](?:'?[a-z])*$\", w) is not None\n", "\n", "def movie_review_feature_generator(category) :\n", "    return (word\n", "            for word in movie_reviews.words(categories=[category])\n", "            if candidate_feature_word(word))\n", "\n", "best_sentiment_words = compute_best_features(['pos', 'neg'], movie_review_feature_generator, 2000)\n", "\n", "def best_sentiment_word_feats(words):\n", "    return dict([(word, True) for word in words if word in best_sentiment_words])" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 3 }
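, { "cell_type": "markdown", "metadata": {}, "source": [ "To see the representation the classifier will actually consume, here's `best_sentiment_word_feats` applied to a hand-tokenized snippet. (A quick check you can run yourself: exactly which words show up depends on which ones survived feature selection, so your dictionary may differ.)" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# Words outside best_sentiment_words are simply dropped from the features.\n", "best_sentiment_word_feats(\"an astounding , seamless and outstanding film\".split())" ], "language": "python", "metadata": {}, "outputs": [] }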
] }, { "cell_type": "code", "collapsed": false, "input": [ "training_corpus = [(list(movie_reviews.words(fileid)), category)\n", " for category in movie_reviews.categories()\n", " for fileid in movie_reviews.fileids(category)]\n", "\n", "class Experiment(object) :\n", " pass" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 4 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our first experiment uses the `best_word_feats` that we've just computed - it understands the sentiment in the text based on the most informative words that occur.\n", "\n", "Here's the basic strategy for building and using the classifier:\n", "- Create a list of training pairs for the learner of the form `(feature dictionary, category label)`\n", "- Train a naive Bayes classifier on the training data\n", "- Write a feature extractor that will take raw text into a feature dictionary\n", "- Write a classification function that will predict the sentiment of raw text\n", "\n", "This also takes a moment to run as it scans through the corpus, makes the features, aggregates them into counts, and uses the counts to build a statistical model. Again, if it bugs you, you could pickle the classifier." ] }, { "cell_type": "code", "collapsed": false, "input": [ "expt1 = Experiment()\n", "expt1.feature_data = [(best_sentiment_word_feats(d), c) for (d,c) in training_corpus]\n", "expt1.opinion_classifier = nltk.NaiveBayesClassifier.train(expt1.feature_data)\n", "expt1.preprocess = lambda text : best_sentiment_word_feats([w.lower() for w in re.findall(r\"\\w(?:'?\\w)*\", text)])\n", "expt1.classify = lambda text : expt1.opinion_classifier.classify(expt1.preprocess(text)) " ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 5 }, { "cell_type": "markdown", "metadata": {}, "source": [ "NLTK's `show_most_informative_features` method allows you to see what the classifier has learned. You can see that a big effect comes from adjectives and a few verbs that do express really strong opinions one way or the other." ] }, { "cell_type": "code", "collapsed": false, "input": [ "expt1.opinion_classifier.show_most_informative_features(20)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "Most Informative Features\n", " avoids = True pos : neg = 13.0 : 1.0\n", " astounding = True pos : neg = 12.3 : 1.0\n", " slip = True pos : neg = 11.7 : 1.0\n", " outstanding = True pos : neg = 11.5 : 1.0\n", " ludicrous = True neg : pos = 11.0 : 1.0\n", " fascination = True pos : neg = 11.0 : 1.0\n", " insulting = True neg : pos = 11.0 : 1.0\n", " sucks = True neg : pos = 10.6 : 1.0\n", " hudson = True neg : pos = 10.3 : 1.0\n", " hatred = True pos : neg = 10.3 : 1.0\n", " seamless = True pos : neg = 10.3 : 1.0\n", " thematic = True pos : neg = 10.3 : 1.0\n", " dread = True pos : neg = 9.7 : 1.0\n", " conveys = True pos : neg = 9.7 : 1.0\n", " incoherent = True neg : pos = 9.7 : 1.0\n", " addresses = True pos : neg = 9.7 : 1.0\n", " annual = True pos : neg = 9.7 : 1.0\n", " accessible = True pos : neg = 9.7 : 1.0\n", " stupidity = True neg : pos = 9.0 : 1.0\n", " illogical = True neg : pos = 9.0 : 1.0\n" ] } ], "prompt_number": 6 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's an example of sentiment detection in action." 
] }, { "cell_type": "code", "collapsed": false, "input": [ "expt1.classify(\"The dinner was outstanding.\")" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 7, "text": [ "u'pos'" ] } ], "prompt_number": 7 }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Jacob Perkins recommends][1] including particularly important bigrams in the feature representation of each document. _Bigram_ is a fancy word for two words that occur successively in a document. NLTK's collocation finder selects bigrams that occur much more frequently than you would expect by chance - this is an indication that the two words together make an idiomatic expression for conveying a single concept that is important to the document.\n", "\n", "Here we repeat the usual pipleline to include 200 useful bigram features on each document.\n", "\n", "[1]:http://streamhacker.com/2010/06/16/text-classification-sentiment-analysis-eliminate-low-information-features/\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "def best_bigram_word_feats(words, score_fn=nltk.BigramAssocMeasures.chi_sq, n=200):\n", " bigram_finder = nltk.BigramCollocationFinder.from_words(words)\n", " bigrams = bigram_finder.nbest(score_fn, n)\n", " d = dict([(bigram, True) for bigram in bigrams])\n", " d.update(best_sentiment_word_feats(words))\n", " return d\n", "\n", "expt2 = Experiment()\n", "expt2.feature_data = [(best_bigram_word_feats(d), c) for (d,c) in training_corpus]\n", "expt2.opinion_classifier = nltk.NaiveBayesClassifier.train(expt2.feature_data)\n", "expt2.preprocess = lambda text : best_bigram_word_feats([w.lower() for w in re.findall(r\"\\w(?:'?\\w)*\", text)])\n", "expt2.classify = lambda text : expt2.opinion_classifier.classify(expt2.preprocess(text)) " ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 8 }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Perkins reports][1] that the bigram features do lead to a measurable performance improvement. In particular, adding bigrams improves the recall of negative classification, which means that the classifier is much better at reporting negative reviews that are truly negative when it is able to include some of these complex expressions. (Conversely, the classifier also improves the precision with which it recognizes positive reviews, which means that the things that it classifies as positive are more likely to actually be positive.) Probably this is due to the fact that the classifier can now recognize that _not good_ expresses a negative opinion... \n", "\n", "[1]:http://streamhacker.com/2010/05/24/text-classification-sentiment-analysis-stopwords-collocations/\n", "\n", "There are a bunch of interesting bigrams as informative features in the new classifier." 
] }, { "cell_type": "code", "collapsed": false, "input": [ "expt2.opinion_classifier.show_most_informative_features(50)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "Most Informative Features\n", " (u'give', u'us') = True neg : pos = 14.3 : 1.0" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "\n", " avoids = True pos : neg = 13.0 : 1.0\n", " (u'quite', u'frankly') = True neg : pos = 12.3 : 1.0\n", " astounding = True pos : neg = 12.3 : 1.0\n", " (u'does', u'so') = True pos : neg = 12.3 : 1.0\n", " (u'fairy', u'tale') = True pos : neg = 11.7 : 1.0\n", " slip = True pos : neg = 11.7 : 1.0\n", " (u'&', u'robin') = True neg : pos = 11.7 : 1.0\n", " outstanding = True pos : neg = 11.5 : 1.0\n", " ludicrous = True neg : pos = 11.0 : 1.0\n", " insulting = True neg : pos = 11.0 : 1.0\n", " (u'batman', u'&') = True neg : pos = 11.0 : 1.0\n", " fascination = True pos : neg = 11.0 : 1.0\n", " (u'well', u'worth') = True pos : neg = 11.0 : 1.0\n", " sucks = True neg : pos = 10.6 : 1.0\n", " (u'was', u'made') = True neg : pos = 10.3 : 1.0\n", " hatred = True pos : neg = 10.3 : 1.0\n", " thematic = True pos : neg = 10.3 : 1.0\n", " hudson = True neg : pos = 10.3 : 1.0\n", " seamless = True pos : neg = 10.3 : 1.0\n", " (u'your', u'time') = True neg : pos = 9.7 : 1.0\n", " addresses = True pos : neg = 9.7 : 1.0\n", " incoherent = True neg : pos = 9.7 : 1.0\n", " conveys = True pos : neg = 9.7 : 1.0\n", " dread = True pos : neg = 9.7 : 1.0\n", " (u'red', u'planet') = True neg : pos = 9.7 : 1.0\n", " accessible = True pos : neg = 9.7 : 1.0\n", " (u'dealing', u'with') = True neg : pos = 9.7 : 1.0\n", " annual = True pos : neg = 9.7 : 1.0\n", " stupidity = True neg : pos = 9.0 : 1.0\n", " (u'that', u'will') = True pos : neg = 9.0 : 1.0\n", " reminder = True pos : neg = 9.0 : 1.0\n", " mulan = True pos : neg = 9.0 : 1.0\n", " (u'about', u'an') = True neg : pos = 9.0 : 1.0\n", " excruciatingly = True neg : pos = 9.0 : 1.0\n", " fairness = True neg : pos = 9.0 : 1.0\n", " illogical = True neg : pos = 9.0 : 1.0\n", " (u'about', u'it') = True neg : pos = 9.0 : 1.0\n", " gump = True pos : neg = 9.0 : 1.0\n", "(u'best', u'supporting') = True pos : neg = 9.0 : 1.0\n", " frances = True pos : neg = 9.0 : 1.0\n", " sans = True neg : pos = 9.0 : 1.0\n", " deft = True pos : neg = 9.0 : 1.0\n", " (u'ed', u'harris') = True pos : neg = 9.0 : 1.0\n", " (u'be', u'funny') = True neg : pos = 9.0 : 1.0\n", " winslet = True pos : neg = 9.0 : 1.0\n", " (u'saving', u'grace') = True neg : pos = 8.6 : 1.0\n", " scum = True pos : neg = 8.3 : 1.0\n", " (u'makes', u'no') = True neg : pos = 8.3 : 1.0\n", " predator = True neg : pos = 8.3 : 1.0\n" ] } ], "prompt_number": 9 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we turn to dialogue act tagging. NLTK comes with [a collection of text chat utterances that were collected by Craig Martell and colleagues at the Naval Postgraduate School.][1] These items have been hand annotated with a number of categories indicating the different roles the utterances play in a conversation. The list of tags appears here. The best way to understand what the tags mean is to see an example utterance from each class, so running this code also prints out some examples. 
The examples also show what the corpus is like -- including the way user names have been anonymized...\n", "\n", "[1]:http://faculty.nps.edu/cmartell/NPSChat.htm\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "chat_utterances = nltk.corpus.nps_chat.xml_posts()\n", "\n", "dialogue_acts = ['Accept', \n", "                 'Bye', \n", "                 'Clarify', \n", "                 'Continuer', \n", "                 'Emotion', \n", "                 'Emphasis', \n", "                 'Greet', \n", "                 'nAnswer', \n", "                 'Other', \n", "                 'Reject', \n", "                 'Statement', \n", "                 'System', \n", "                 'whQuestion', \n", "                 'yAnswer', \n", "                 'ynQuestion']\n", "\n", "for a in dialogue_acts :\n", "    for u in chat_utterances :\n", "        if u.get('class') == a:\n", "            print \"Example of {}: {}\".format(a, u.text)\n", "            break" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "Example of Accept: 10-19-20sUser7 is a gay name.\n", "Example of Bye: brb\n", "Example of Clarify: sho*\n", "Example of Continuer: and i dont even know what that means.\n", "Example of Emotion: :P\n", "Example of Emphasis: i thought of that!\n", "Example of Greet: hey everyone \n", "Example of nAnswer: no " ] }, { "output_type": "stream", "stream": "stdout", "text": [ "\n", "Example of Other: 0" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "\n", "Example of Reject: don't golf clap me.\n", "Example of Statement: now im left with this gay name\n", "Example of System: PART\n", "Example of whQuestion: whats everyone up to?\n", "Example of yAnswer: yes 10-19-20sUser30\n", "Example of ynQuestion: any ladis wanna chat? 29 m\n" ] } ], "prompt_number": 10 }, { "cell_type": "markdown", "metadata": {}, "source": [ "This kind of language is pretty different from the edited writing that many NLP tools assume. For the machine learning itself, it hardly matters what the input to the classifier is. But it does pay to be smarter about dividing the text up into its tokens (the words or other meaningful elements). So we'll load in [the tokenizer that Chris Potts wrote][1] to analyze twitter feeds. Some of the things that it does nicely:\n", "- Handles emoticons, hashtags, twitter user names and other items that mix letters and punctuation\n", "- Merges dates, URLs, phone numbers and similar items into single tokens\n", "- Handles ordinary punctuation in an intelligent way as well\n", "\n", "[1]:http://sentiment.christopherpotts.net/tokenizing.html" ] }, { "cell_type": "code", "collapsed": false, "input": [ "from happyfuntokenizing import Tokenizer\n", "chat_tokenize = Tokenizer(preserve_case=False).tokenize" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 11 }
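, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's a quick look at what the tokenizer does with a chat-style utterance. (A sketch, not run here: the exact output depends on the happyfuntokenizing version, but emoticons like :-) and contractions should come through as single tokens.)" ] }, { "cell_type": "code", "collapsed": false, "input": [ "chat_tokenize(\"lol that's sooo funny :-) isn't it?\")" ], "language": "python", "metadata": {}, "outputs": [] }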
] }, { "cell_type": "code", "collapsed": false, "input": [ "def chat_feature_generator(category) :\n", " return (word\n", " for post in chat_utterances\n", " if post.get('class') == category\n", " for word in chat_tokenize(post.text))\n", "\n", "best_act_words = compute_best_features(dialogue_acts, chat_feature_generator, 2000)\n", " \n", "def best_act_word_feats(words):\n", " return dict([(word, True) for word in words if word in best_act_words])\n", "\n", "def best_act_words_post(post) :\n", " return best_act_word_feats(chat_tokenize(post.text))" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 12 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here again is the setup to build the classifier and apply it to novel text. No surprises here." ] }, { "cell_type": "code", "collapsed": false, "input": [ "expt3 = Experiment()\n", "expt3.feature_data = [(best_act_words_post(p), p.get('class')) for p in chat_utterances]\n", "expt3.act_classifier = nltk.NaiveBayesClassifier.train(expt3.feature_data)\n", "expt3.preprocess = lambda text : best_act_word_feats(chat_tokenize(text))\n", "expt3.classify = lambda text : expt3.act_classifier.classify(expt3.preprocess(text)) " ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 13 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's a little glimpse into what this classifier is paying attention to." ] }, { "cell_type": "code", "collapsed": false, "input": [ "expt3.act_classifier.show_most_informative_features(20)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "Most Informative Features\n", " hi = True Greet : System = 486.1 : 1.0" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "\n", " bye = True Bye : Statem = 460.6 : 1.0\n", " part = True System : Statem = 351.4 : 1.0\n", " brb = True Bye : Statem = 341.4 : 1.0\n", " no = True nAnswe : System = 318.1 : 1.0\n", " yes = True yAnswe : Emotio = 287.8 : 1.0\n", " nope = True nAnswe : Statem = 276.4 : 1.0\n", " are = True whQues : System = 228.5 : 1.0\n", " wanna = True ynQues : System = 192.7 : 1.0\n", " > = True Other : System = 170.7 : 1.0\n", " u = True whQues : System = 162.7 : 1.0\n", " what = True whQues : Emotio = 158.2 : 1.0\n", " nite = True Bye : Statem = 157.1 : 1.0\n", " tc = True Bye : Statem = 157.1 : 1.0\n", " lol = True Emotio : System = 156.5 : 1.0\n", " right = True Accept : System = 146.3 : 1.0\n", " whats = True whQues : Statem = 145.2 : 1.0\n", " any = True ynQues : Greet = 144.4 : 1.0\n", " 0 = True Other : Statem = 139.1 : 1.0\n", " and = True Contin : Emotio = 137.6 : 1.0\n" ] } ], "prompt_number": 14 }, { "cell_type": "markdown", "metadata": {}, "source": [ "This demonstration wouldn't be complete without an illustration of how to use the classifiers we've created for an actual chatbot. We've already seen a whole bunch of ways to produce the content of the response -- I won't repeat that here. What's interesting here is to show how you can use the classification results in coherent ways to shape the course of the conversation.\n", "\n", "The strategy I illustrate here is to have a different response generator for each of the different dialogue act types. Each response generator gets the input text (that's not used here, but you'd have to use it to make a pattern-matching response or an information-retrieval response like we've seen ealier). 
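, { "cell_type": "markdown", "metadata": {}, "source": [ "Before wiring the tagger into a chatbot, it's worth poking at it directly. (A sketch with the labels I'd expect, 'whQuestion' and 'Greet', though what you actually get depends on the trained model.)" ] }, { "cell_type": "code", "collapsed": false, "input": [ "print expt3.classify(\"whats everyone up to?\")\n", "print expt3.classify(\"hey everyone\")" ], "language": "python", "metadata": {}, "outputs": [] }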
, { "cell_type": "markdown", "metadata": {}, "source": [ "This demonstration wouldn't be complete without an illustration of how to use the classifiers we've created for an actual chatbot. We've already seen a whole bunch of ways to produce the content of the response -- I won't repeat that here. What's interesting here is to show how you can use the classification results in coherent ways to shape the course of the conversation.\n", "\n", "The strategy I illustrate here is to have a different response generator for each of the different dialogue act types. Each response generator gets the input text (that's not used here, but you'd have to use it to make a pattern-matching response or an information-retrieval response like we've seen earlier). It also gets the recognized sentiment of the input text as an argument, so it can potentially do something different depending on whether the input is recognized as expressing a positive opinion or a negative opinion.\n", "\n", "I store the response generators in a dictionary -- Python doesn't have a `switch` statement like C or Java, but it does have first-class functions. That makes a dictionary of functions the easiest way to choose among a range of behaviors conditioned on a value from a small set of possibilities (like the set of dialogue acts). So the basic pattern of a response is to classify the act and sentiment of the input, and then call the response generator for the recognized act with the original text and the recognized sentiment. \n", "\n", "Obviously, this is just an invitation to take this further...." ] }, { "cell_type": "code", "collapsed": false, "input": [ "def respond_question(text, valence) :\n", "    if valence == 'pos' :\n", "        return \"I wish I knew.\"\n", "    else :\n", "        return \"That's a tough question.\"\n", "\n", "def respond_other(text, valence) :\n", "    return \":P Well, what next?\"\n", "\n", "def respond_statement(text, valence) :\n", "    if valence == 'pos' :\n", "        return \"Great! Tell me more.\"\n", "    else :\n", "        return \"Ugh. Is anything good happening?\"\n", "\n", "def respond_bye(text, valence) :\n", "    return \"I guess it's time for me to go then.\"\n", "\n", "def respond_greet(text, valence) :\n", "    return \"Hey there!\"\n", "\n", "def respond_reject(text, valence) :\n", "    if valence == 'pos' :\n", "        return \"Well, if you insist!\"\n", "    else :\n", "        return \"I still think you should reconsider.\"\n", "\n", "def respond_emphasis(text, valence) :\n", "    if valence == 'pos' :\n", "        return '!!!'\n", "    else :\n", "        return \":(\"\n", "\n", "responses = {'Accept': respond_other, \n", "             'Bye': respond_bye, \n", "             'Clarify': respond_other, \n", "             'Continuer': respond_other, \n", "             'Emotion': respond_other, \n", "             'Emphasis': respond_emphasis, \n", "             'Greet': respond_greet, \n", "             'nAnswer': respond_other, \n", "             'Other': respond_other, \n", "             'Reject': respond_reject, \n", "             'Statement': respond_statement, \n", "             'System': respond_other, \n", "             'whQuestion': respond_question, \n", "             'yAnswer': respond_other, \n", "             'ynQuestion': respond_question}\n", "\n", "def respond(text) :\n", "    act = expt3.classify(text)\n", "    valence = expt1.classify(text)\n", "    return responses[act](text, valence)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 15 }, { "cell_type": "code", "collapsed": false, "input": [ "respond(\"Everything sucks\")" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 16, "text": [ "'Ugh. Is anything good happening?'" ] } ], "prompt_number": 16 }, { "cell_type": "code", "collapsed": true, "input": [ "respond(\"I've got fantastic news!\")" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 17, "text": [ "'!!!'" ] } ], "prompt_number": 17 }, { "cell_type": "code", "collapsed": false, "input": [ "respond(\"A hot cup of tea always makes me happy.\")" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 18, "text": [ "'Great! 
Tell me more.'" ] } ], "prompt_number": 18 }, { "cell_type": "code", "collapsed": false, "input": [ "respond(\"Did you hear what happened to me?\")" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 19, "text": [ "\"That's a tough question.\"" ] } ], "prompt_number": 19 }, { "cell_type": "code", "collapsed": false, "input": [ "respond(\"brb\")" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 20, "text": [ "\"I guess it's time for me to go then.\"" ] } ], "prompt_number": 20 } ], "metadata": {} } ] }