{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "In this document, we will use 20newsgroup dataset as a corpus to practice:\n", "- data collection\n", "- feature extraction\n", "- model training\n", "- model evaluation\n", "\n", "reference: http://blog.csdn.net/qq_35082030/article/details/70211552" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['alt.atheism', 'comp.graphics', 'sci.med', 'soc.religion.christian']\n" ] } ], "source": [ "from sklearn.datasets import fetch_20newsgroups\n", "categories = ['alt.atheism', 'soc.religion.christian', 'comp.graphics', 'sci.med']\n", "twenty_train = fetch_20newsgroups(subset='train', categories=categories, shuffle=True, random_state=42)\n", "print twenty_train.target_names" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The original function is:\n", "fetch_20newsgroups(data_home=None,subset='train',categories=None,shuffle=True,random_state=42,remove=(),download_if_missing=True)\n", "\n", "- subset contains three datasets: train, test, all.\n", "- categories means the category of news. If categories are specifed in the parameter, the specified categories will be extracted. Otherwise, all categories will be extracted.\n", "- shuffle means mess up the oder\n", "- remove some stopwords like header, footer, from, etc.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature extraction\n", "- CountVectorizer\n", "- tf-idf" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from sklearn.feature_extraction.text import CountVectorizer\n", "count_vect = CountVectorizer()\n", "X_train_counts = count_vect.fit_transform(twenty_train.data)\n", "\n", "from sklearn.feature_extraction.text import TfidfTransformer\n", "tfidf_transformer = TfidfTransformer()\n", "X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Original CountVectorizer class is:\n", "\n", "class sklearn.feature_extraction.text.CountVectorizer(input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, stop_words=None, token_pattern='(?u)\\b\\w\\w+\\b', ngram_range=(1, 1), analyzer='word', max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=)\n", "\n", "### Important parameters:\n", "- token_pattern:表示token的正则表达式,需要设置analyzer == 'word',默认的正则表达式选择2个及以上的字母或数字作为token,标点符号默认当作token分隔符,而不会被当作token\n", "- max_df:可以设置为范围在[0.0 1.0]的float,也可以设置为没有范围限制的int,默认为1.0。这个参数的作用是作为一个阈值,当构造语料库的关键词集的时候,如果某个词的document frequence大于max_df,这个词不会被当作关键词。如果这个参数是float,则表示词出现的次数与语料库文档数的百分比,如果是int,则表示词出现的次数。如果参数中已经给定了vocabulary,则这个参数无效\n", "- 类似于max_df,不同之处在于如果某个词的document frequence小于min_df,则这个词不会被当作关键词\n", "- max_features:默认为None,可设为int,对所有关键词的term frequency进行降序排序,只取前max_features个作为关键词集" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Original TfidfVectorizer class is:\n", "class sklearn.feature_extraction.text.TfidfVectorizer(input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, analyzer='word', stop_words=None, token_pattern='(?u)\\b\\w\\w+\\b', ngram_range=(1, 1), max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=, norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)\n", "\n", "### Important parameters:\n", "TfidfVectorizer与CountVectorizer有很多相同的参数,下面只解释不同的参数\n", "- binary:默认为False,tf-idf中每个词的权值是tf*idf,如果binary设为True,所有出现的词的tf将置为1,TfidfVectorizer计算得到的tf与CountVectorizer得到的tf是一样的,就是词频,不是词频/该词所在文档的总词数。\n", "- norm:默认为'l2',可设为'l1'或None,计算得到tf-idf值后,如果norm='l2',则整行权值将归一化,即整行权值向量为单位向量,如果norm=None,则不会进行归一化。大多数情况下,使用归一化是有必要的。\n", "- use_idf:默认为True,权值是tf*idf,如果设为False,将不使用idf,就是只使用tf,相当于CountVectorizer了。\n", "- smooth_idf:idf平滑参数,默认为True,idf=ln((文档总数+1)/(包含该词的文档数+1))+1,如果设为False,idf=ln(文档总数/包含该词的文档数)+1\n", "- sublinear_tf:默认为False,如果设为True,则替换tf为1 + log(tf)。\n", "\n", "Reference: http://blog.csdn.net/du_qi/article/details/51564303" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model training\n", "Here we use naive bayes model to classify the news." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[2 1]\n", "'Abuse of antibiotics is very common' => sci.med\n", "'OpenGL on the GPU is fast' => comp.graphics\n" ] } ], "source": [ "from sklearn.naive_bayes import MultinomialNB\n", "# start training\n", "clf = MultinomialNB().fit(X_train_tfidf, twenty_train.target)\n", "\n", "# Next, we will write two sentences to test the model.\n", "docs_new = ['Abuse of antibiotics is very common', 'OpenGL on the GPU is fast']\n", "X_new_counts = count_vect.transform(docs_new)\n", "X_new_tfidf = tfidf_transformer.transform(X_new_counts)\n", "\n", "# the following code will show the category pridicted by the model\n", "predicted = clf.predict(X_new_tfidf)\n", "print predicted\n", "for doc, category in zip(docs_new, predicted):\n", " print('%r => %s' % (doc, twenty_train.target_names[category]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From the result, we can see that the model is not bad. But this is not the way we evaluate a model, we need to do the followings:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model Evaluation \n", "We need to use the Accuracy, Precision, Recall, F1-measure to evaluate a model.\n", "\n", "### Original metrics.classification_report class\n", "sklearn.metrics.classification_report(y_true, y_pred, labels=None, target_names=None, sample_weight=None, digits=2)\n", "Parameters:\n", "- y_true : 1d array-like, or label indicator array / sparse matrix Ground truth (correct) target values.\n", "- y_pred : 1d array-like, or label indicator array / sparse matrix Estimated targets as returned by a classifier.\n", "- target_names : list of strings Optional display names matching the labels (same order).\n", "Returns:\n", "- report: string\n", "- Text summary of the precision, recall, F1 score for each class.\n", "\n", "### About Precision, Recall, F1-score\n", "\n", "- 正确率(Precision) = 提取出的正确信息条数 / 提取出的信息条数 \n", "\n", "- 召回率(Recall) = 提取出的正确信息条数 / 样本中的信息条数 \n", "\n", "#### 两者(Precision, Recall)取值在0和1之间,数值越接近1,查准率或查全率就越高。 \n", "\n", "- F值(F1-score) = 正确率 * 召回率 * 2 / (正确率 + 召回率) (F 值即为正确率和召回率的调和平均值)\n", "\n", "#### E.g.\n", "不妨举这样一个例子:某池塘有1400条鲤鱼,300只虾,300只鳖。现在以捕鲤鱼为目的。撒一大网,逮着了700条鲤鱼,200只虾,100只鳖。那么,这些指标分别如下:\n", "\n", "正确率 = 700 / (700 + 200 + 100) = 70%\n", "\n", "召回率 = 700 / 1400 = 50%\n", "\n", "F值 = 70% * 50% * 2 / (70% + 50%) = 58.3%\n", "\n", "不妨看看如果把池子里的所有的鲤鱼、虾和鳖都一网打尽,这些指标又有何变化:\n", "\n", "正确率 = 1400 / (1400 + 300 + 300) = 70%\n", "\n", "召回率 = 1400 / 1400 = 100%\n", "\n", "F值 = 70% * 100% * 2 / (70% + 100%) = 82.35% \n", "\n", "- 由此可见,正确率是评估捕获的成果中目标成果所占得比例;召回率,顾名思义,就是从关注领域中,召回目标类别的比例;而F值,则是综合这二者指标的评估指标,用于综合反映整体的指标。\n" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " precision recall f1-score support\n", "\n", " alt.atheism 0.97 0.60 0.74 319\n", " comp.graphics 0.96 0.89 0.92 389\n", " sci.med 0.97 0.81 0.88 396\n", "soc.religion.christian 0.65 0.99 0.78 398\n", "\n", " avg / total 0.88 0.83 0.84 1502\n", "\n", "accurary\t0.834886817577\n" ] } ], "source": [ "from sklearn import metrics\n", "import numpy as np;\n", "\n", "# get the test data from test dataset\n", "twenty_test = fetch_20newsgroups(subset='test',categories=categories, shuffle=True, random_state=42)\n", "docs_test = twenty_test.data\n", "#vectorize test data\n", "X_test_counts = count_vect.transform(docs_test)\n", "#extract feature of test data\n", "X_test_tfidf = tfidf_transformer.transform(X_test_counts)\n", "# use the model to predict the category \n", "predicted = clf.predict(X_test_tfidf)\n", "\n", "#get the precision, recall, f1-score and support of this model\n", "print(metrics.classification_report(twenty_test.target, predicted,target_names=twenty_test.target_names))\n", "#get the accuracy of the model\n", "print(\"accurary\\t\"+str(np.mean(predicted == twenty_test.target)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the precision of some categories are close to 1, which means a high precision.\n", "In this model, the accuracy is 0.8348.\n", "\n", "#### Interpretation of the terms in the report:\n", "\n", "precision: Precision is the ratio of correctly predicted positive observations to the total predicted positive observations.\n", "Precision = TP/TP+FP\n", "\n", "Recall: (Sensitivity) - Recall is the ratio of correctly predicted positive observations to the all observations in actual class - yes.\n", "Recall = TP/TP+FN\n", "\n", "F1-score: The f1-score gives you the harmonic mean of precision and recall. The scores corresponding to every class will tell you the accuracy of the classifier in classifying the data points in that particular class compared to all other classes.\n", "F1 Score = 2*(Recall * Precision) / (Recall + Precision)\n", "\n", "Support: The support is the number of occurrences of each class in y_true.For instance, the support of alt.atheism category is 319. This means in the dataset, there are 319 records with the category of alt.atheism.\n", "\n", "Accuracy: Accuracy is the most intuitive performance measure and it is simply a ratio of correctly predicted observation to the total observations. One may think that, if we have high accuracy then our model is best. Yes, accuracy is a great measure but only when you have symmetric datasets where values of false positive and false negatives are almost same. Therefore, you have to look at other parameters to evaluate the performance of your model. For our model, we have got 0.803 which means our model is approx. 80% accurate.\n", "\n", "Reference: http://blog.exsilio.com/all/accuracy-precision-recall-f1-score-interpretation-of-performance-measures/\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.13" } }, "nbformat": 4, "nbformat_minor": 2 }