{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Suppose the spam generator becomes more intelligent and begins producing prose which looks \"more legitimate\" than before. \n", "\n", "There are numerous ways the prose could become more like legitimate text. For the purpose of this notebook we will simply force the spam data to *drift* by adding the first few lines of Pride and Prejudice to the start of the spam documents in our testing set. We will then see how the trained model responds. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import os.path\n", "\n", "df = pd.read_parquet(os.path.join(\"data\", \"training.parquet\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We split the data into training and testing sets, as in the modelling notebooks. We use the `random_state` parameter to ensure that the data is split in the same way as it was when we fit the model. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn import model_selection\n", "\n", "df_train, df_test = model_selection.train_test_split(df, random_state=43)\n", "df_test_spam = df_test[df_test.label == 'spam'].copy() #filter the spam documents" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def add_text(doc, adds):\n", " \"\"\"\n", " takes in a string _doc_ and\n", " appends text _adds_ to the start\n", " \"\"\"\n", " \n", " return adds + doc" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pride_pred = '''It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.However little known the feelings or views of such a man may be on his first entering a neighbourhood, this truth is so well fixed in the minds of the surrounding families, that he is considered the rightful property of some one or other of their daughters.“My dear Mr. Bennet,” said his lady to him one day, “have you heard that Netherfield Park is let at last?” Mr. Bennet replied that he had not. “But it is,” returned she; “for Mrs. Long has just been here, and she told me all about it.” Mr. Bennet made no answer. “Do you not want to know who has taken it?” cried his wife impatiently.'''" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# appending text to the start of the spam\n", "df_test_spam[\"text\"] = df_test_spam.text.apply(add_text, adds=pride_pred)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.set_option('display.max_colwidth', -1) #ensures that all the text is visible\n", "df_test_spam.sample(3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now pass this \"drifted\" data through the pipeline we created: we compute feature vectors, and we make spam/legitimate classifications using the model we trained. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.pipeline import Pipeline\n", "import pickle, os\n", "\n", "## loading in feature vectors pipeline\n", "filename = 'feature_pipeline.sav'\n", "feat_pipeline = pickle.load(open(filename, 'rb'))\n", "\n", "## loading model\n", "filename = 'model.sav'\n", "model = pickle.load(open(filename, 'rb'))\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pipeline = Pipeline([\n", " ('features',feat_pipeline),\n", " ('model',model)\n", "])\n", "\n", "## we need to fit the model, using the un-drifted data, as we did in the previous notebooks. \n", "\n", "pipeline.fit(df_train[\"text\"], df_train[\"label\"])\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "## we can then go on and make predictions for the drifted spam, using the fitted pipeline above. \n", "# predict test instances\n", "y_preds = pipeline.predict(df_test_spam[\"text\"])\n", "print(y_preds)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "np.array(np.unique(y_preds, return_counts = True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model is worse at classifying drifted data, since this is not what we trained the model on. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises\n", "The two models perform very similarly on the \"drifted\" data in this notebook. Consider alternative types of data drift and see how the models perform: \n", "1. What happens when fewer words from Pride and Prejudice are appended to the spam? \n", "2. How about using a completely different excerpt of Austen? \n", "3. How do the models perform when generic text (neither Austen nor food reviews) is appended to the spam? " ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8" } }, "nbformat": 4, "nbformat_minor": 2 }