{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "#
Individual Project: Predicting the rating of a drug based on the review
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Greetings, \n", "\n", "Machine learning has permeated nearly all fields and disciplines of study. One hot topic is using natural language processing and sentiment analysis to identify, extract, and make use of subjective information. The UCI ML Drug Review dataset provides patient reviews on specific drugs along with related conditions and a 10-star patient rating system reflecting overall patient satisfaction. The data was obtained by crawling online pharmaceutical review sites. \n", "\n", "This data was published in a study on sentiment analysis of drug experience over multiple facets, ex. sentiments learned on specific aspects such as effectiveness and side effects (see the acknowledgments section to learn more).\n", "\n", "The dataset was originally published on the UCI Machine Learning repository: https://archive.ics.uci.edu/ml/datasets/Drug+Review+Dataset+%28Drugs.com%29\n", "\n", "Citation: \n", "\n", "Felix Gräßer, Surya Kallumadi, Hagen Malberg, and Sebastian Zaunseder. 2018. Aspect-Based Sentiment Analysis of Drug Reviews Applying Cross-Domain and Cross-Data Learning. In Proceedings of the 2018 International Conference on Digital Health (DH '18). ACM, New York, NY, USA, 121-125.\n", "\n", "You can also download it easily from kagle dataset:\n", "https://www.kaggle.com/jessicali9530/kuc-hackathon-winter-2018" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To simplify the project evaluation I'll follow the proposed plan:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Project outline\n", "1. Feature and data explanation\n", "2. EDA, VDA, Insights and found dependencies\n", "3. Metrics selection \n", "4. Data preprocessing and model selection\n", "5. Cross-validation and adjustment of model hyperparameters\n", "6. Creation of new features and description of this process\n", "7. Plotting training and validation curves\n", "8. Prediction for test or hold-out samples \n", "9. Conclusions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 1. 
Feature and data explanation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, load the dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import os\n", "import warnings\n", "warnings.filterwarnings('ignore')\n", "\n", "random_state = 42\n", "\n", "PATH_TO_DATA = 'C:/Projects/Python/ODS_ml_course/indiv_proj/'\n", "df_train = pd.read_csv(os.path.join(PATH_TO_DATA,\n", " 'drugsComTrain_raw.csv'), parse_dates=[\"date\"])\n", "df_test = pd.read_csv(os.path.join(PATH_TO_DATA,\n", " 'drugsComTest_raw.csv'), parse_dates=[\"date\"])\n", "df_train.drop('uniqueID', axis=1, inplace=True)\n", "df_test.drop('uniqueID', axis=1, inplace=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's take a look at our data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_train.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_train.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_train.info()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_test.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_test.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_test.info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The columns included in this dataset are:\n", "\n", "1. drugName (categorical): name of drug \n", "2. condition (categorical): name of condition \n", "3. review (text): patient review \n", "4. rating (numerical): 10-star patient rating \n", "5. date (date): date of review entry \n", "6. usefulCount (numerical): number of users who found review useful\n", "\n", "The structure of the data is as follows: a patient purchases a drug for his or her condition and writes a review and a rating for that drug. Afterwards, if other users read that review and find it helpful, they click a button, which increments usefulCount by 1.\n", "\n", "The data is split into a train (75%) and test (25%) partition.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The initial tasks were:\n", "\n", "1. Classification: Can you predict the patient's condition based on the review?\n", "2. Regression: Can you predict the rating of the drug based on the review?\n", "3. Sentiment analysis: What elements of a review make it more helpful to others? Which patients tend to have more negative reviews? Can you determine if a review is positive, neutral, or negative?\n", "4. Data visualizations: What kind of drugs are there? What sorts of conditions do these patients have?\n", "\n", "The rating variable is ordinal, and modelling all ten levels directly would be a real pain. Instead, let's turn the task into sentiment classification of the reviews: negative (rating 1-4), neutral (5-7) and positive (8-10). " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# target: 0 = negative (rating < 5), 1 = neutral (5-7), 2 = positive (8-10)\n", "df_train['target'] = df_train['rating'].apply(lambda x: 0 if x < 5 else 1 if 4 < x < 8 else 2)\n", "df_test['target'] = df_test['rating'].apply(lambda x: 0 if x < 5 else 1 if 4 < x < 8 else 2)\n", "\n", "df_train.drop('rating', axis=1, inplace=True)\n", "df_test.drop('rating', axis=1, inplace=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 2. 
EDA, VDA, Insights and found dependencies" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It's time to load several libraries necessary for data analysis:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import re\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "import seaborn as sns" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First of all, as you may have noticed, there are several NaNs in the 'condition' column in both datasets." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_train[pd.isnull(df_train['condition'])].head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_train[pd.isnull(df_train['condition'])].shape, df_test[pd.isnull(df_test['condition'])].shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Well, the amount is low, and these NaNs do look like genuinely missing values. We'll delete them later." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Fine. Let's take a look at our features." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_train.describe(include=['object','bool'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_test.describe(include=['object','bool'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As for the condition column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vc_condition_train = df_train['condition'].value_counts()\n", "vc_condition_test = df_test['condition'].value_counts()\n", "\n", "print(vc_condition_train[0:25])\n", "print(vc_condition_test[0:25])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vc_condition_train[vc_condition_train < 10].shape, vc_condition_test[vc_condition_test < 10].shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vc_condition_test[0:25].index.isin(vc_condition_train[0:25].index).all()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The top conditions are almost the same for both datasets. More than half of the conditions occur less than 10 times." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since condition is related to the drug name: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "conditions_drugs_train = df_train.groupby(['condition'])['drugName'].nunique().sort_values(ascending=False)\n", "conditions_drugs_test = df_test.groupby(['condition'])['drugName'].nunique().sort_values(ascending=False)\n", "\n", "print(conditions_drugs_train[0:25])\n", "print(conditions_drugs_test[0:25])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The top-15 conditions by the number of drugs are the same in both datasets. It is worth noting the 'Not Listed / Othe' option, which confirms that the NaNs truly are missing values. Also note crawler errors such as '3 users found this comment helpful.'. We'll fix these later at the preprocessing stage. 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As for drugName column" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vc_drug_train = df_train['drugName'].value_counts()\n", "vc_drug_test = df_test['drugName'].value_counts()\n", "\n", "print(vc_drug_train[0:25])\n", "print(vc_drug_test[0:25])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vc_drug_test[0:15].index.isin(vc_drug_train[0:15].index).all()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "vc_drug_train[vc_drug_train < 10].shape, vc_drug_test[vc_drug_test < 10].shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Well, almost identically for conditions. Top15 drugs are the same. More than half of the drugs occur less than 10 times." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(df_train['condition'].\\\n", " iloc[df_train['condition'].astype(str).\\\n", " apply(lambda str: len(str.split())).sort_values(ascending = False).index[0:10]])\n", "print(df_train['condition'][24040:24041].apply(lambda str: len(str.split())))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next part is reviews. let's look at a few" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_train['review'][17]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_test['review'][42]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_train['review'][100000] # what a guy" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_test['review'][53766 - 1] " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_train['review'][161297 - 1] " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first thing that catches the eye is ''' for apostrophe. Next, some formating commands such as '\\r', '\\n'. We'll also delete these." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next one is usefulcount feature" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_train['usefulCount'].describe()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_test['usefulCount'].describe()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_, axes = plt.subplots(nrows=1, ncols=2, figsize=(15, 6))\n", "sns.distplot(df_train['usefulCount'], ax=axes[0], norm_hist=True);\n", "axes[0].set(xlabel='usefulCount_train', ylabel='count');\n", "sns.distplot(df_test['usefulCount'], ax=axes[1], norm_hist=True);\n", "axes[1].set(xlabel='usefulCount_test', ylabel='count');" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(12,9))\n", "\n", "sns.distplot(df_train['usefulCount'], color='blue', kde=False, norm_hist=True)\n", "sns.distplot(df_test['usefulCount'], color='green', kde=False, norm_hist=True)\n", "\n", "plt.xlabel('usefulCount')\n", "plt.ylabel('Counts')\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The destributions looks like very similar for train and test datasets. 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As for the distribution itself, the problem in 'usefulCount' is that the distribution is skewed with long tails. The std is 36 when the mean is 27-28. The 'usefulCounts' is related to condition and drug. For common condition there are much more people that read the reviews and obviously much higher the 'usefulCounts'. We'll try to handle it later." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Date:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(np.min(df_train['date']),np.max(df_train['date']))\n", "print(np.min(df_test['date']),np.max(df_test['date']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ok. Time for target." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(df_train['target'].value_counts())\n", "print(df_train['target'].value_counts(normalize=True))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(df_test['target'].value_counts())\n", "print(df_test['target'].value_counts(normalize=True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Classes are imbalanced." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now it's safe to say that the test and train samples were derived from a single distribution. Let's concatenate them and continue to analyse." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all = pd.concat([df_train,df_test]).reset_index(drop=True)\n", "mask = df_all.index < df_train.shape[0]\n", "df_all['istrain'] = False\n", "df_all['istrain'][mask] = True" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Sorting by the time" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all.sort_values(by='date',\n", " ascending=True).head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create the main time features" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all['year'] = df_all['date'].dt.year\n", "df_all['month'] = df_all['date'].dt.month\n", "df_all['dom'] = df_all['date'].dt.day\n", "df_all['dow'] = df_all['date'].dt.weekday\n", "df_all.drop('date', axis=1, inplace=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Graphics in SVG format are more sharp and legible\n", "%config InlineBackend.figure_format = 'svg'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_, axes = plt.subplots(nrows=1, ncols=2, figsize=(11, 4))\n", "sns.countplot(x='year', hue='target', ax=axes[0], data=df_all[df_all['istrain']]);\n", "axes[0].set(xlabel='year_train', ylabel='count');\n", "sns.countplot(x='year', hue='target', ax=axes[1], data=df_all[df_all['istrain'] == False]);\n", "axes[1].set(xlabel='year_test', ylabel='count');" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.countplot(x='year', hue='target', data=df_all);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Intresting. There can be seen 3 groups of years: \n", "1. 2008\n", "2. 2009-2014\n", "3. 2015-2017\n", "\n", "Of particular importance is the fact that the amount of negative reviews has increased significantly from 2015. 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(7,4))\n", "sns.countplot(x='month', hue='target', data=df_all);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The month looks useless." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(9,4))\n", "sns.countplot(x='dom', hue='target', data=df_all);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Nothing interesting (only 7 months of the year have a 31st day)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(7,4))\n", "sns.countplot(x='dow', hue='target', data=df_all);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Same here. Time features on their own (excluding the year) won't help, but we'll check this later." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Back to usefulCount:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all.groupby(['target'])['usefulCount'].describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As expected: positive reviews tend to get higher 'usefulCount' values than negative ones. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "del vc_drug_test, vc_drug_train, \\\n", " vc_condition_train, vc_condition_test, conditions_drugs_test, conditions_drugs_train" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To sum up, \n", "1. There are ~1300 missing values in the condition column across both datasets. We've also found crawler errors in this column.\n", "2. The datasets were derived from a single distribution, as suggested by the time features, usefulCount and the target.\n", "3. We should bear in mind that the target classes are imbalanced when we choose the metric and tune the model.\n", "4. Almost all time features are useless (excluding the year). We identified 3 groups of years: 2008, 2009-2014, 2015-2017. Of particular importance is the fact that the number of negative reviews has increased significantly since 2015; before 2015 the numbers of negative and neutral reviews were almost identical.\n", "5. There are several things we have to correct in the reviews, such as the apostrophe problem and formatting characters.\n", "6. The usefulCount feature looks useful, but its scale depends on how many people are interested in a given drug and condition." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 3. Data preprocessing, metric and model selection" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's start with condition since this column has missing values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all[pd.isnull(df_all['condition'])].head(10)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(pd.isnull(df_all['condition']).sum())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you remember, there is a 'Not Listed / Othe' condition. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(df_all[df_all['condition'] == 'Not Listed / Othe'].shape)\n", "df_all[df_all['condition'] == 'Not Listed / Othe'].head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are also some crawler errors in condition column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all['condition'].value_counts().tail(20)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "'users found this comment helpful.' -- let's count the amount of such errors with regular expressions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import re\n", "\n", "df_all['condition'].str.contains(re.compile('users found this comment helpful')).sum()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Well, the amount of errors\\missing values in the condition column is quiet small. And, according to the reviews, it can be seen that these people have different conditions. It's better to delete them all." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mask = (pd.isnull(df_all['condition'])) | (df_all['condition'] == 'Not Listed / Othe') |\\\n", " (df_all['condition'].str.contains(re.compile('users found this comment helpful')))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mask.sum()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all = df_all.drop(df_all[mask].index).reset_index(drop=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all.shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For condition feature we'll apply CountVectorizer, which can by default do all the nessesary staff such as lowercased strings, excluding punctuation marks and ')(/', etc. All we have to know the number of words in the longest 'condition' for choosing the ngram range." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('The number of words in the longest condition:', df_all['condition'].astype(str).\\\n", " apply(lambda str: len(str.split())).sort_values(ascending = False).iloc[0])\n", "print(df_all['condition'].iloc[df_all['condition'].astype(str).\\\n", " apply(lambda str: len(str.split())).sort_values(ascending = False).index[0]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "drugName. Let's one more time take a look at the values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all['drugName'].head(15)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all['drugName'].tail(15)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As for condition, let's find the longest drug name." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(df_all['drugName'].iloc[df_all['drugName'].astype(str).\\\n", " apply(lambda str: len(str.split())).sort_values(ascending = False).index[1]])\n", "print(df_all['drugName'].iloc[df_all['drugName'].astype(str).\\\n", " apply(lambda str: len(str.split())).sort_values(ascending = False).index[4]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ok, the max ngram range for drugName would be 10." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Time for the reviews." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create WordClouds." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# !pip install wordcloud\n", "# !conda install -c conda-forge wordcloud \n", "from wordcloud import WordCloud, STOPWORDS\n", "\n", "def plot_wordcloud(data, title):\n", " wordcloud = WordCloud(background_color='black', stopwords = STOPWORDS, max_words = 100, max_font_size = 100, \n", " random_state = 42, width=1280, height=720)\n", " wordcloud.generate(str(data))\n", " \n", " plt.figure(figsize=(9, 6))\n", "\n", " plt.imshow(wordcloud, interpolation=\"bilinear\");\n", "\n", " plt.title(title, fontdict={'size': 20, 'color': 'black', \n", " 'verticalalignment': 'bottom'})\n", " \n", " plt.axis('off')\n", " plt.tight_layout() " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In computing, stop words are words which are filtered out before or after processing of natural language data (text). Though \"stop words\" usually refers to the most common words in a language, there is no single universal list of stop words used by all natural language processing tools, and indeed not all tools even use such a list. Some tools specifically avoid removing these stop words to support phrase search. \n", "Citation: https://en.wikipedia.org/wiki/Stop_words" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_wordcloud(df_all['review'], 'Wordcloud for both datasets')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's group the reviews by target." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_wordcloud(df_all['review'][df_all['target'] == 0], 'Wordcloud for negative reviews')\n", "plot_wordcloud(df_all['review'][df_all['target'] == 1], 'Wordcloud for neutral reviews')\n", "plot_wordcloud(df_all['review'][df_all['target'] == 2], 'Wordcloud for positive reviews')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Well, it looks like single words won't help us much; only a few words such as 'horrible' and 'terrible' stand out. \n", "\n", "Can you see the difference?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we remember, there is a problem with apostrophes. We'll replace the escaped apostrophes with a space." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# the raw reviews encode apostrophes as the HTML entity &#039;\n", "df_all['review'] = df_all['review'].str.replace('&#039;', \" \", regex=False);" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all['review'][17]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Fine. We'll use TfidfVectorizer for the reviews, with English stop words removed. We'll try 3 ngram_ranges: (1,1), (1,3), (2,3)." ] }, 
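{ "cell_type": "markdown", "metadata": {}, "source": [ "To make the ngram_range choice more concrete, here's a tiny illustration (toy sentences, not taken from the dataset) of what TfidfVectorizer extracts once English stop words are removed. Note that on newer sklearn versions get_feature_names_out() replaces get_feature_names()." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# illustration only: how ngram_range affects the extracted vocabulary\n", "from sklearn.feature_extraction.text import TfidfVectorizer\n", "\n", "toy_docs = ['the drug worked great for me', 'terrible side effects, it did not work']\n", "toy_tfidf = TfidfVectorizer(stop_words='english', ngram_range=(1, 3))\n", "toy_matrix = toy_tfidf.fit_transform(toy_docs)\n", "print(toy_tfidf.get_feature_names())\n", "print(toy_matrix.shape)" ] }, 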
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As for time feature, we'll first include all of them with propper transformation (ohe for years; sin_cos trasnform for days_of_week, days_of_month, month)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def add_time_features(df):\n", " df['dow_sin'] = df['dow'].apply(lambda ts: np.sin(2*np.pi*ts/7.))\n", " df['dow_cos'] = df['dow'].apply(lambda ts: np.cos(2*np.pi*ts/7.))\n", " \n", " df['dom_sin'] = df['dom'].apply(lambda ts: np.sin(2*np.pi*ts/31.))\n", " df['dom_cos'] = df['dom'].apply(lambda ts: np.cos(2*np.pi*ts/31.))\n", " \n", " df['month_sin'] = df['month'].apply(lambda ts: np.sin(2*np.pi*ts/12.))\n", " df['month_cos'] = df['month'].apply(lambda ts: np.cos(2*np.pi*ts/12.))\n", "\n", " df.drop(['month', 'dom', 'dow'], axis=1, inplace=True)\n", "\n", " return df" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_all = add_time_features(df_all)\n", "df_all.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Initially, we'll just scale 'usefulCount' with StandardScaler " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Model selection.\n", "\n", "Taking into account \n", "1. the size of the task (the amount of features after preprocessing -- a waste of time using RandomForestClassifier, GBM, etc)\n", "2. the size of the corpus of the reviews (~160k reviews in the train dataset)\n", "3. the presense of numerical features (can't simply use something like NaiveBayesClassifier)\n", "4. possibilities of the laptop (sad)\n", "5. instructions not to dive deep (NN, specific methods for specific tasks, which sentimental analysis is)\n", "\n", "I decided to work with multinomial logistic regression. We worked with it several times. I think there's no need to describe the pros and cons one more time." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Metric selection.\n", "\n", "Well, since there is multinomial classification + imbalanced classes:\n", "\n", "0. negative -- 0.251032\n", "1. neutral -- 0.147305\n", "2. positive -- 0.601663\n", "\n", "So, simple accuracy isn't good idea. In this case better choose precision\\recall or even 'weighted' F1-score. Here's the description from sklearn documentation https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html#sklearn.metrics.accuracy_score \n", "'weighted':\n", "Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ok. 
Let's continue" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.base import BaseEstimator, TransformerMixin\n", "from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\n", "from sklearn.preprocessing import StandardScaler, OneHotEncoder\n", "from sklearn.metrics import confusion_matrix, f1_score, accuracy_score, precision_score, recall_score\n", "\n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn.linear_model import SGDClassifier\n", "\n", "from sklearn.compose import ColumnTransformer\n", "from sklearn.pipeline import Pipeline, FeatureUnion\n", "from sklearn.impute import SimpleImputer\n", "from sklearn.model_selection import train_test_split, GridSearchCV\n", "\n", "class LemmaTokenizer(object):\n", " def __init__(self):\n", " self.wnl = WordNetLemmatizer()\n", " def __call__(self, doc):\n", " return [self.wnl.lemmatize(t) for t in word_tokenize(doc)]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train = df_all[df_all['istrain']]\n", "X_test = df_all[df_all['istrain'] == False]\n", "y_train = df_all['target'][df_all['istrain']]\n", "y_test = df_all['target'][df_all['istrain'] == False]\n", "\n", "X_train.drop(['istrain','target'], axis=1, inplace=True)\n", "X_test.drop(['istrain','target'], axis=1, inplace=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Pipeline. Time is running out, deadline is close, so this is the best working variant I can do for now :)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class TextSelector(BaseEstimator, TransformerMixin):\n", " def __init__(self, key):\n", " self.key = key\n", "\n", " def fit(self, X, y=None):\n", " return self\n", "\n", " def transform(self, X):\n", " return X[self.key]\n", "\n", "class NumberSelector(BaseEstimator, TransformerMixin):\n", " def __init__(self, key):\n", " self.key = key\n", "\n", " def fit(self, X, y=None):\n", " return self\n", "\n", " def transform(self, X):\n", " return X[[self.key]]\n", " \n", "uc_transformer = Pipeline(steps = [\n", " ('selector_uc', NumberSelector(key = 'usefulCount')),\n", " ('scaler_uc', StandardScaler())\n", " ])\n", "\n", "dow_sin_transformer = Pipeline(steps = [\n", " ('selector_dow_sin', NumberSelector(key = 'dow_sin')),\n", " ('scaler_dow_sin', StandardScaler())\n", " ])\n", "dow_cos_transformer = Pipeline(steps = [\n", " ('selector_dow_cos', NumberSelector(key = 'dow_cos')),\n", " ('scaler_dow_cos', StandardScaler())\n", " ])\n", "\n", "dom_sin_transformer = Pipeline(steps = [\n", " ('selector_dom_sin', NumberSelector(key = 'dom_sin')),\n", " ('scaler_dom_csin', StandardScaler())\n", " ])\n", "dom_cos_transformer = Pipeline(steps = [\n", " ('selector_dom_cos', NumberSelector(key = 'dom_cos')),\n", " ('scaler_dom_cos', StandardScaler())\n", " ])\n", "\n", "month_sin_transformer = Pipeline(steps = [\n", " ('selector_month_sin', NumberSelector(key = 'month_sin')),\n", " ('scaler_month_sin', StandardScaler())\n", " ])\n", "month_cos_transformer = Pipeline(steps = [\n", " ('selector_month_cos', NumberSelector(key = 'month_cos')),\n", " ('scaler_month_cos', StandardScaler())\n", " ])\n", "\n", "cond_tranformer = Pipeline(steps = [\n", " ('selector_cond', TextSelector(key='condition')),\n", " ('cv_cond', CountVectorizer(stop_words='english', ngram_range=(1,8)))\n", " ])\n", "\n", "drug_tranformer = Pipeline(steps = [\n", " ('selector_drug', 
TextSelector(key='drugName')),\n", " ('cv_drug', CountVectorizer(stop_words='english', ngram_range=(1,10)))\n", " ])\n", "\n", "y_transformer = Pipeline(steps = [\n", " ('selector_y', NumberSelector(key='year')),\n", " ('ohe_y', OneHotEncoder(handle_unknown='ignore'))\n", " ])\n", "\n", "rev_tranformer = Pipeline(steps = [\n", " ('selector_rev', TextSelector(key='review')),\n", " ('tfidf_rev',TfidfVectorizer(stop_words='english', \n", " ngram_range=(1,3),\n", " max_features = 100000\n", " )\n", " )\n", " ])\n", "\n", "preprocessor = FeatureUnion([\n", " ('usefulCount', uc_transformer),\n", " ('dow_sin', dow_sin_transformer), \n", " ('dow_cos', dow_cos_transformer), \n", " ('dom_sin', dom_sin_transformer), \n", " ('dom_cos', dom_cos_transformer), \n", " ('month_sin', month_sin_transformer), \n", " ('month_cos', month_cos_transformer), \n", " ('condition', cond_tranformer),\n", " ('drugName', drug_tranformer),\n", " ('year', y_transformer),\n", " ('review', rev_tranformer)\n", " ])\n", "\n", "\n", "mlog = Pipeline(steps = [('preprocessor', preprocessor),\n", " ('logreg', LogisticRegression(random_state=42, solver = 'lbfgs', multi_class='multinomial'))])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's fit the model and predict." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "\n", "warnings.filterwarnings('ignore')\n", "\n", "mlog.fit(X_train, y_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Weighted F1 score for the test dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y_pred = mlog.predict(X_test)\n", "print('Weighted f1 score for test dataset',f1_score(y_test, y_pred, average='weighted'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's also compute the accuracy." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Accuracy score', accuracy_score(y_test, y_pred))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And the confusion matrix (the plotting helper below is taken from the sklearn documentation): https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import itertools\n", "\n", "def plot_confusion_matrix(cm, classes,\n", " normalize=False,\n", " title='Confusion matrix',\n", " cmap=plt.cm.Blues):\n", " \"\"\"\n", " This function prints and plots the confusion matrix.\n", " Normalization can be applied by setting `normalize=True`.\n", " \"\"\"\n", " if normalize:\n", " cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n", " print(\"Normalized confusion matrix\")\n", " else:\n", " print('Confusion matrix, without normalization')\n", "\n", " print(cm)\n", "\n", " plt.imshow(cm, interpolation='nearest', cmap=cmap)\n", " plt.title(title)\n", " plt.colorbar()\n", " tick_marks = np.arange(len(classes))\n", " plt.xticks(tick_marks, classes, rotation=45)\n", " plt.yticks(tick_marks, classes)\n", "\n", " fmt = '.2f' if normalize else 'd'\n", " thresh = cm.max() / 2.\n", " for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n", " plt.text(j, i, format(cm[i, j], fmt),\n", " horizontalalignment=\"center\",\n", " color=\"white\" if cm[i, j] > thresh else \"black\")\n", "\n", " plt.ylabel('True label')\n", " plt.xlabel('Predicted 
label')\n", " plt.tight_layout()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_confusion_matrix(confusion_matrix(y_test, y_pred),classes = [0, 1, 2])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Well, I suppose that's OK for a default model. Time for CV." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 4. Creation of new features and description of this process\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ok, now we'll try to drop the features that might have no impact. As was stated, all time features except the year should be removed. Moreover, instead of the year column we will create 3 boolean features for the 3 groups of years described above.\n", "\n", "We also need to handle the problem with usefulCount. A simple idea would be to divide it by the number of reviews per condition." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def add_new_features(df):\n", " # boolean indicators for the three year groups found during EDA\n", " df['year1'] = df['year'] == 2008\n", " df['year2'] = (df['year'] < 2015) & (df['year'] > 2008)\n", " df['year3'] = (2014 < df['year']) & (df['year'] < 2018)\n", "\n", " df.drop('year', axis=1, inplace=True)\n", "\n", " return df\n", "\n", "df_all.drop(['dow_sin', 'dow_cos', 'dom_sin', 'dom_cos', 'month_sin', 'month_cos'], axis = 1, inplace = True)\n", "df_all = add_new_features(df_all)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train = df_all[df_all['istrain']]\n", "X_test = df_all[df_all['istrain'] == False]\n", "y_train = df_all['target'][df_all['istrain']]\n", "y_test = df_all['target'][df_all['istrain'] == False]\n", "\n", "X_train.drop(['istrain','target'], axis=1, inplace=True)\n", "X_test.drop(['istrain','target'], axis=1, inplace=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y1_transformer = Pipeline(steps = [\n", " ('selector_y1', NumberSelector(key='year1'))\n", " ])\n", "\n", "y2_transformer = Pipeline(steps = [\n", " ('selector_y2', NumberSelector(key='year2'))\n", " ])\n", "\n", "y3_transformer = Pipeline(steps = [\n", " ('selector_y3', NumberSelector(key='year3'))\n", " ])\n", "\n", "preprocessor2 = FeatureUnion([\n", " ('usefulCount', uc_transformer),\n", " ('condition', cond_tranformer),\n", " ('drugName', drug_tranformer),\n", " ('year1', y1_transformer),\n", " ('year2', y2_transformer),\n", " ('year3', y3_transformer),\n", " ('review', rev_tranformer)\n", " ])\n", "\n", "\n", "mlog2 = Pipeline(steps = [('preprocessor', preprocessor2),\n", " ('logreg', LogisticRegression(random_state=42, solver = 'lbfgs', multi_class='multinomial'))])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check this out." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "mlog2.fit(X_train, y_train)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y_pred2 = mlog2.predict(X_test)\n", "print('Weighted f1 score for test dataset',f1_score(y_test, y_pred2, average='weighted'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Accuracy score', accuracy_score(y_test, y_pred2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Well, despite the imbalanced classes, the accuracy looks fine." ] }, 
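{ "cell_type": "markdown", "metadata": {}, "source": [ "Given the class imbalance, a per-class breakdown is also informative (an extra check on top of the original metrics):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# per-class precision, recall and F1 for the second model\n", "from sklearn.metrics import classification_report\n", "\n", "print(classification_report(y_test, y_pred2, target_names=['negative', 'neutral', 'positive']))" ] }, 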
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_confusion_matrix(confusion_matrix(y_test, y_pred2),classes = [0, 1, 2])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Well, that looks better!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 5. Cross-validation and adjustment of model hyperparameters\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since our classes are imbalanced and the train/test sets come from the same distribution, we'll perform StratifiedKFold cross-validation with 3 splits." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import StratifiedKFold" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cv = StratifiedKFold(n_splits = 3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Time for tuning the hyperparameters. \n", "\n", "Logreg, if you remember, supports two kinds of regularization: l1 and l2. However, of the solvers that handle the multinomial case, only 'saga' works with l1, so we'll stick with l2. I think there's no need to describe what l2 regularization is.\n", "\n", "As for Tfidf, as was stated above, we'll try several ngram_ranges. Let the max number of features stay constant (100k)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "param_grid = {\n", " 'preprocessor__review__tfidf_rev__ngram_range': [(1,1), (1,3), (2,3)],\n", " 'logreg__C': [0.1, 0.5, 1.0, 2],\n", "}\n", "\n", "grid_search = GridSearchCV(mlog2, param_grid, cv=cv, scoring = 'f1_weighted', verbose = 20, n_jobs = -1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "grid_search.fit(X_train,y_train)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('grid search params:', grid_search.best_params_)\n", "print('grid search score:', grid_search.best_score_)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 6. Plotting training and validation curves\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "No time for this, unfortunately." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 7. Prediction for test or hold-out samples \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The deadline is over, but I got stuck here: mlog2.set_params(grid_search.best_params_) isn't working, because set_params expects keyword arguments rather than a dict. So let's set the best parameters manually (see also the sketch below)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# best parameters found by the grid search, set manually\n", "mlog2.set_params(logreg__C = 2, preprocessor__review__tfidf_rev__ngram_range = (1, 3))\n", "mlog2.fit(X_train, y_train)\n", "y_pred_final = mlog2.predict(X_test)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Weighted f1 score for test dataset', f1_score(y_test, y_pred_final, average='weighted'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Accuracy score', accuracy_score(y_test, y_pred_final))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The score decreased slightly. Well, nothing left to do for now. A more thorough CV (more splits, more parameter values) should improve the results, since the train and test sets are derived from the same distribution." ] }, 
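{ "cell_type": "markdown", "metadata": {}, "source": [ "Regarding the set_params issue mentioned above: sklearn's set_params takes keyword arguments, so the best_params_ dict has to be unpacked; alternatively, GridSearchCV (with the default refit=True) already keeps a refitted pipeline in best_estimator_. A minimal sketch of both options (left commented out to avoid refitting here):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# option 1: unpack the best parameters into set_params\n", "# mlog2.set_params(**grid_search.best_params_)\n", "\n", "# option 2: use the pipeline already refitted by GridSearchCV (refit=True by default)\n", "# y_pred_final = grid_search.best_estimator_.predict(X_test)" ] }, 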
{ "cell_type": "markdown", "metadata": {}, "source": [ "# 8. Conclusions\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this project we considered a sentiment classification problem based on the UCI ML Drug Review dataset. I tried to apply as much as possible of what we have learned over the last 3 months. Nevertheless, the results speak for themselves: not great, but... c'mon, it's just a beginning. A simple pipeline with a little feature engineering gave us the result above. As stated in the evaluation criteria, I have to describe the value of the project. Well, for me it's huge. Filling out forms is no match for doing your own project. I hope you share my opinion.\n", "\n", "Possible applications? None so far. But as a baseline for learning something about sentiment analysis it's fine.\n", "\n", "And, of course, there are a lot of ways to improve: NLP techniques, deep learning, etc. But first of all I have to finish the task.\n", "\n", "Thanks for reading!\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }