{ "metadata": { "name": "", "signature": "sha256:f0e926c40954923308721ad65f8a417829292d4ba66a385c38c7d0d1ca29dfd1" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "##4. Doing Naive Bayes Classification\n", "\n", "### Lynn Cherny, 2/10/15, arnicas@gmail\n", "Full repo here: https://github.com/arnicas/NLP-in-Python" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is an example of going from labeled text to machine classification, first with NLTK and then the Python machine learning library scikit-learn. Examples updated from my OpenVis Conf talk here, which is more entertaining: https://www.youtube.com/watch?v=f41U936WqPM and slides: http://www.slideshare.net/arnicas/the-bones-of-a-bestseller\n", "\n", "###Warning: Rated NC-17. Using text samples from \"50 Shades of Gray\"! (Because spam is boring.)###\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "Inspired by this image from the Economist (orignally http://www.economist.com/blogs/graphicdetail/2012/11/fifty-shades-data-visualisations):\n", "\n", "\n", " \n", "I wondered if I could identify the sex scenes automatically, based on training examples. That's \"classification.\"\n", "\n", "Because the book was too hard to read, I farmed out (badly formatted) short chunks of \"50 Shades of Gray\" to Mechanical Turkers to rate as \"sexy\" or \"not\" (on a ratings scale): a score of 0 is \"not a sex scene\", while \"1\" and \"2\" are increasing in steaminess, and \"3\" is definitely a sex scene. (In later scoring, I reduced the options to just 3.) \n", "\n", "Assuming that a score of >= 2.5 is a sex scene, I put the scores and text into a usable file with that labeling." ] }, { "cell_type": "code", "collapsed": false, "input": [ "labelsfile = 'data/csv/fiftyshades_labeled.txt'" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 3 }, { "cell_type": "code", "collapsed": false, "input": [ "def get_documents_csv(filename):\n", " \"\"\" Read in the labeled chunks and classifications.\n", " Assume label is cell 1, doc text is cell 2, classification is cell 3.\n", " \"\"\"\n", " \n", " labels = []\n", " documents = []\n", " classif = []\n", " for line in open(filename):\n", " fields = line.split(\"\\t\")\n", " if fields[0].strip != 'label': # header row\n", " documents.append(fields[1].strip())\n", " labels.append(fields[0].strip())\n", " if fields[2]:\n", " classif.append(fields[2].strip(\"\\n\"))\n", " print \"Got\", len(documents), \"chunks\"\n", " return (documents, labels, classif)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 4 }, { "cell_type": "code", "collapsed": false, "input": [ "docs, labels, classes = get_documents_csv(labelsfile)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "Got 382 chunks\n" ] } ], "prompt_number": 5 }, { "cell_type": "code", "collapsed": false, "input": [ "# Hmm, in my classes I did categorize \"maybes.\" These can either be used as \"no\" or as a third class.\n", "classes[25:40]" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 6, "text": [ "['maybe\\r',\n", " 'yes\\r',\n", " 'maybe\\r',\n", " 'no\\r',\n", " 'maybe\\r',\n", " 'yes\\r',\n", " 'maybe\\r',\n", " 'maybe\\r',\n", " 'maybe\\r',\n", " 'no\\r',\n", " 'maybe\\r',\n", " 'maybe\\r',\n", " 'no\\r',\n", " 'yes\\r',\n", " 'maybe\\r']" ] } ], "prompt_number": 6 }, { "cell_type": "code", "collapsed": false, "input": [ "labels[25:40] # these are the original file chunks for reference" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 7, "text": [ "['fifty_500_166',\n", " 'fifty_500_199',\n", " 'fifty_500_247',\n", " 'fifty_500_365',\n", " 'fifty_500_138',\n", " 'fifty_500_338',\n", " 'fifty_500_348',\n", " 'fifty_500_364',\n", " 'fifty_500_84',\n", " 'fifty_500_366',\n", " 'fifty_500_85',\n", " 'fifty_500_368',\n", " 'fifty_500_56',\n", " 'fifty_500_92',\n", " 'fifty_500_276']" ] } ], "prompt_number": 7 }, { "cell_type": "markdown", "metadata": {}, "source": [ "### I did do some hierarchical cluster analysis on it, btw (code elsewhere)- colored labels by green = yes, red = no (apologies to the color blind). This suggests I probably can build a classifier!\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "###But let's build a classifier. Here's a schematic from Perkins for the machine learning workflow:\n", "\n", "\n", "\n", "Some references:\n", "* Source reference for the NLTK examples: http://www.nltk.org/book/ch06.html\n", "* Experiments by Jacob and output results: http://streamhacker.com/2010/10/25/training-binary-text-classifiers-nltk-trainer/\n" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# this text can't be used as is in the classifier -- and notice this was a \"maybe\":\n", "\n", "docs[25]" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 8, "text": [ "'\"any time , Anastasia. I won * t stop you. If you go , however * that * s it. Just so you know. * * Okay , * I answer softly. If I go , that * s it. The thought is surprisingly painful . The waiter arrives with our first course. How can I possibly eat ? Holy Moses * he * s ordered oysters on a bed of ice . * I hope you like oysters. * Christian * s voice is soft . * I * ve never had one. * Ever . * Really ? Well. * He reaches for one. * All you do is tip and swallow. I think you can manage that. * He gazes at me , and I know what he * s referring to. I blush scarlet. He grins at me , squirts some lemon juice onto his oyster , and then tips it into his mouth . * Hmm , delicious. Tastes of the sea. * He grins at me. * Go on , * he encourages . * So , I don * t chew it ? * * No , Anastasia , you don * t. * His eyes are alight with humor. He looks so young like this . I bite my lip and his expression changes instantly. He looks sternly at me. I reach across and pick up my first-ever oyster. Okay * here goes nothing. I squirt some lemon juice on it and tip it up. It slips down my throat , all sea water , salt , the sharp tang of citrus , and fleshiness * ooh. I lick my lips , and he * s watching me intently , his eyes hooded . * Well ? * * I * ll have another , * I say dryly . * Good girl , * he says proudly . * Did you choose these deliberately ? Aren * t they known for their aphrodisiac qualities ? * * No , they are the first item on the menu. I don * t need an aphrodisiac near you. I think you know that , and I think you react the same way near me , * he says simply. * So where were we ? * He glances at my e-mail as I reach for another oyster . He reacts the same way. I affect him * wow . * Obey me in all things. Yes , I want you to do that. I need you to do that. Think of it as role-play , Anastasia. * * But I * m worried you * ll hurt me. * * Hurt you how ? * * Physically. * And emotionally . * Do you really think I would do that ? Go beyond any limit you can * t take ? * * You * ve said you * ve hurt someone before. * * Yes , I have. It was a long\"'" ] } ], "prompt_number": 8 }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "I don't remember why my original text extracts had so much horrible * formatting - something to do with getting it to Excel for Mechanical Turk as fast as possible. Showing it to you as the raters saw it, for honestly purposes, and maybe it won't get me in trouble for sharing if it's so awful to read like that!\n", "\n", "Let's clean it up - here we are using a regex tokenizer to clean the garbage out. Also, making a class to store a bunch of stuff in!" ] }, { "cell_type": "code", "collapsed": false, "input": [ "class Document:\n", " def __init__(self):\n", " Document.words = []\n", " Document.original = \"\"\n", " Document.clean = \"\"\n", " Document.label = \"\"\n", " Document.classif = \"\"\n", "\n", "def clean_doc(doc):\n", " from nltk import corpus\n", " import re\n", " stopwords = corpus.stopwords.words('english')\n", " new = Document()\n", " new.original = doc\n", " sentence = doc\n", " sentence = sentence.lower()\n", " # note that I'm looking for non-numeric alphabetic items; this makes a difference from sklearn\n", " words = re.findall(r'\\w+', sentence, flags = re.UNICODE | re.LOCALE)\n", " new.clean = \" \".join(words)\n", " words = [word for word in words if word not in stopwords]\n", " new.words = words\n", " return new" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 9 }, { "cell_type": "code", "collapsed": false, "input": [ "# example ouput:\n", "\n", "clean_doc(docs[25]).clean" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 10, "text": [ "'any time anastasia i won t stop you if you go however that s it just so you know okay i answer softly if i go that s it the thought is surprisingly painful the waiter arrives with our first course how can i possibly eat holy moses he s ordered oysters on a bed of ice i hope you like oysters christian s voice is soft i ve never had one ever really well he reaches for one all you do is tip and swallow i think you can manage that he gazes at me and i know what he s referring to i blush scarlet he grins at me squirts some lemon juice onto his oyster and then tips it into his mouth hmm delicious tastes of the sea he grins at me go on he encourages so i don t chew it no anastasia you don t his eyes are alight with humor he looks so young like this i bite my lip and his expression changes instantly he looks sternly at me i reach across and pick up my first ever oyster okay here goes nothing i squirt some lemon juice on it and tip it up it slips down my throat all sea water salt the sharp tang of citrus and fleshiness ooh i lick my lips and he s watching me intently his eyes hooded well i ll have another i say dryly good girl he says proudly did you choose these deliberately aren t they known for their aphrodisiac qualities no they are the first item on the menu i don t need an aphrodisiac near you i think you know that and i think you react the same way near me he says simply so where were we he glances at my e mail as i reach for another oyster he reacts the same way i affect him wow obey me in all things yes i want you to do that i need you to do that think of it as role play anastasia but i m worried you ll hurt me hurt you how physically and emotionally do you really think i would do that go beyond any limit you can t take you ve said you ve hurt someone before yes i have it was a long'" ] } ], "prompt_number": 10 }, { "cell_type": "code", "collapsed": false, "input": [ "# clean them all...\n", "\n", "clean_docs = [clean_doc(x) for x in docs]" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 11 }, { "cell_type": "code", "collapsed": false, "input": [ "# Fix up with more info on each object:\n", "\n", "def add_ids_classes(doc_objs, labels, classes):\n", " # Go thru the objects we just made and add the corresponding class and label\n", " for i,x in enumerate(doc_objs):\n", " x.label = labels[i]\n", " x.id = i\n", " x.classif = classes[i].strip(\"\\r\") # may be necessary to strip, was for me\n", " return doc_objs" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 12 }, { "cell_type": "code", "collapsed": false, "input": [ "clean_docs = add_ids_classes(clean_docs, labels, classes)" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 13 }, { "cell_type": "code", "collapsed": false, "input": [ "clean_docs[0]\n", "clean_docs[0].classif" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 14, "text": [ "'maybe'" ] } ], "prompt_number": 14 }, { "cell_type": "code", "collapsed": false, "input": [ "# We will consider the \"maybe\" as no, for now:\n", "\n", "neg_docs = [doc for doc in clean_docs if doc.classif == 'no' or doc.classif == 'maybe']\n", "pos_docs = [doc for doc in clean_docs if doc.classif == 'yes']" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 15 }, { "cell_type": "code", "collapsed": false, "input": [ "print len(neg_docs), len(pos_docs)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "327 55\n" ] } ], "prompt_number": 16 }, { "cell_type": "code", "collapsed": false, "input": [ "# Bag of Words - just a True for each word's presence in a document. Later we'll use TF-IDF weights.\n", "\n", "def word_feats(words):\n", " return dict([(word, True) for word in words])" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 18 }, { "cell_type": "code", "collapsed": false, "input": [ "neg_words = [(word_feats(doc.words),'neg') for doc in neg_docs]\n", "pos_words = [(word_feats(doc.words),'pos') for doc in pos_docs]" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 19 }, { "cell_type": "code", "collapsed": false, "input": [ "# These are lists of dictionaries, one for each text. Here's the first \"no\" text:\n", "neg_words[0]" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 24, "text": [ "({'accomplishments': True,\n", " 'admiring': True,\n", " 'ana': True,\n", " 'apogee': True,\n", " 'arms': True,\n", " 'around': True,\n", " 'ask': True,\n", " 'asks': True,\n", " 'baby': True,\n", " 'back': True,\n", " 'bed': True,\n", " 'blue': True,\n", " 'breathes': True,\n", " 'care': True,\n", " 'cares': True,\n", " 'christian': True,\n", " 'come': True,\n", " 'comprehending': True,\n", " 'concern': True,\n", " 'confusion': True,\n", " 'conquest': True,\n", " 'course': True,\n", " 'covered': True,\n", " 'cries': True,\n", " 'crushes': True,\n", " 'd': True,\n", " 'depths': True,\n", " 'devours': True,\n", " 'didn': True,\n", " 'doubt': True,\n", " 'effect': True,\n", " 'energized': True,\n", " 'everywhere': True,\n", " 'exactly': True,\n", " 'exciting': True,\n", " 'eyes': True,\n", " 'face': True,\n", " 'far': True,\n", " 'favorite': True,\n", " 'feel': True,\n", " 'fifteen': True,\n", " 'film': True,\n", " 'find': True,\n", " 'finds': True,\n", " 'flown': True,\n", " 'fronts': True,\n", " 'frowns': True,\n", " 'fulfilling': True,\n", " 'full': True,\n", " 'funny': True,\n", " 'gape': True,\n", " 'good': True,\n", " 'gray': True,\n", " 'greatest': True,\n", " 'grey': True,\n", " 'grin': True,\n", " 'grinning': True,\n", " 'grins': True,\n", " 'happening': True,\n", " 'head': True,\n", " 'holy': True,\n", " 'hugging': True,\n", " 'idiot': True,\n", " 'incredulity': True,\n", " 'infectious': True,\n", " 'inside': True,\n", " 'invocation': True,\n", " 'king': True,\n", " 'lie': True,\n", " 'like': True,\n", " 'lips': True,\n", " 'looking': True,\n", " 'love': True,\n", " 'm': True,\n", " 'man': True,\n", " 'many': True,\n", " 'meant': True,\n", " 'mine': True,\n", " 'mirroring': True,\n", " 'miss': True,\n", " 'mr': True,\n", " 'naked': True,\n", " 'number': True,\n", " 'obvious': True,\n", " 'oh': True,\n", " 'one': True,\n", " 'orgasm': True,\n", " 'passion': True,\n", " 'passionate': True,\n", " 'piano': True,\n", " 'pillows': True,\n", " 'play': True,\n", " 'playroom': True,\n", " 'quirk': True,\n", " 'referring': True,\n", " 'release': True,\n", " 'right': True,\n", " 'ripping': True,\n", " 'sad': True,\n", " 'said': True,\n", " 'score': True,\n", " 'see': True,\n", " 'seventeen': True,\n", " 'sex': True,\n", " 'shakes': True,\n", " 'sheet': True,\n", " 'shining': True,\n", " 'shit': True,\n", " 'silly': True,\n", " 'sleep': True,\n", " 'sloshing': True,\n", " 'smiles': True,\n", " 'soft': True,\n", " 'soul': True,\n", " 'staring': True,\n", " 'steele': True,\n", " 'still': True,\n", " 'stirring': True,\n", " 'stop': True,\n", " 'strangely': True,\n", " 'stuff': True,\n", " 'suddenly': True,\n", " 'super': True,\n", " 'talk': True,\n", " 'thought': True,\n", " 'tired': True,\n", " 'today': True,\n", " 'touching': True,\n", " 'turbulent': True,\n", " 'um': True,\n", " 'unexpected': True,\n", " 'vanilla': True,\n", " 've': True,\n", " 'voice': True,\n", " 'want': True,\n", " 'whole': True,\n", " 'wild': True,\n", " 'women': True,\n", " 'wrapped': True},\n", " 'neg')" ] } ], "prompt_number": 24 }, { "cell_type": "code", "collapsed": false, "input": [ "# Let's make a cut point at 3/4 of each list, so we can do separate test and training runs.\n", "\n", "negcutoff = len(neg_words)*3/4\n", "poscutoff = len(pos_words)*3/4" ], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 25 }, { "cell_type": "code", "collapsed": false, "input": [ "# Now split up the lists into training and testing.\n", "\n", "import random\n", "\n", "random.shuffle(neg_words)\n", "random.shuffle(pos_words)\n", "\n", "train_fic = neg_words[:negcutoff] + pos_words[:poscutoff]\n", "test_fic = neg_words[negcutoff:] + pos_words[poscutoff:]\n", "\n", "print 'train on %d docs, test on %d docs' % (len(train_fic), len(test_fic))" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "train on 286 docs, test on 96 docs\n" ] } ], "prompt_number": 19 }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Naive Bayes \n", "\n", "Naive Bayes classifiers (they are a family) are pretty good on text. The Bayes in \"Naive Bayes\" refers to Bayes' Theorem:\n", "\n", "\n", "In English, we say:\n", "\n", "\n", "For texts, this means that the probability of a document being in a class (like, \"sex scene\") is based on the probability of that class in the training set (our prior) times the combined probabilities of the words in the new document appearing in that class. The \"winning\" class is the one with a higher score for the prior times the word-category probability. For a longer explanation of how this works, read [YHat's blog post](http://blog.yhathq.com/posts/naive-bayes-in-python.html).\n", "\n", "It is \"naive\" because it assumes no relationship between the features (words) in the data. It also requires relatively little data to train it, which is good for smaller data problems. But it also scales well to a lot of vocabulary.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "###The results are reasonably good..." ] }, { "cell_type": "code", "collapsed": false, "input": [ "import nltk.classify.util\n", "from nltk.classify import NaiveBayesClassifier\n", "classifier = NaiveBayesClassifier.train(train_fic)\n", "\n", "\n", "print 'accuracy:', nltk.classify.util.accuracy(classifier, test_fic)\n", "classifier.show_most_informative_features(15)" ], "language": "python", "metadata": {}, "outputs": [ { "output_type": "stream", "stream": "stdout", "text": [ "accuracy: " ] }, { "output_type": "stream", "stream": "stdout", "text": [ "0.8125\n", "Most Informative Features\n", " navel = True pos : neg = 25.4 : 1.0" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "\n", " breasts = True pos : neg = 22.3 : 1.0\n", " clitoris = True pos : neg = 21.5 : 1.0\n", " groan = True pos : neg = 20.3 : 1.0\n", " beg = True pos : neg = 17.6 : 1.0\n", " eases = True pos : neg = 17.6 : 1.0\n", " voices = True pos : neg = 17.6 : 1.0\n", " upper = True pos : neg = 17.6 : 1.0\n", " peels = True pos : neg = 17.6 : 1.0\n", " stilling = True pos : neg = 17.6 : 1.0\n", " washing = True pos : neg = 17.6 : 1.0\n", " swirling = True pos : neg = 17.6 : 1.0\n", " stretching = True pos : neg = 17.6 : 1.0\n", " assault = True pos : neg = 17.6 : 1.0\n", " blows = True pos : neg = 17.6 : 1.0\n" ] } ], "prompt_number": 20 }, { "cell_type": "markdown", "metadata": {}, "source": [ "### An interface to Sklearn (scikit-learn) from NLTK is available (see Perkin's Python 3 and NLTK book from Packt)" ] }, { "cell_type": "code", "collapsed": false, "input": [ "from nltk.classify.scikitlearn import SklearnClassifier\n", "from sklearn.naive_bayes import MultinomialNB\n", "from nltk.classify.util import accuracy\n", "\n", "sk_classifier = SklearnClassifier(MultinomialNB())\n", "sk_classifier.train(train_fic)\n", "accuracy(sk_classifier, test_fic)" ], "language": "python", "metadata": {}, "outputs": [ { "metadata": {}, "output_type": "pyout", "prompt_number": 21, "text": [ "0.9583333333333334" ] } ], "prompt_number": 21 }, { "cell_type": "markdown", "metadata": {}, "source": [ "###Perkins discusses some of the differences in performance in his book. Basically, for machine learning problems, sklearn is highly optimized.\n", "\n", "Let's look at a visualization of the accuracy of one of the runs, from my [Openvis Conf talk](http://www.slideshare.net/arnicas/the-bones-of-a-bestseller):\n", "\n", "http://www.ghostweather.com/essays/talks/openvisconf/text_scores/rollover.html\n", "\n", "**Now, optionally, you can look at [notebook 5](5. Naive Bayes in Scikit-Learn.ipynb) on using sklearn to do the same things!**" ] }, { "cell_type": "code", "collapsed": false, "input": [], "language": "python", "metadata": {}, "outputs": [], "prompt_number": 21 } ], "metadata": {} } ] }