{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "We'll start by using the [markovify](https://github.com/jsvine/markovify/) library to make some individual sentences in the style of Jane Austen. These will be the basis for generating a stream of synthetic documents." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import markovify\n", "import codecs\n", "import random\n", "\n", "# Markovify uses a single random generator -- notebooks using it will thus \n", "# only be reproducible if you set a random seed before each cell using markovify\n", "random.seed(0xbaff1ed)\n", "\n", "with codecs.open(\"data/austen.txt\", \"r\", \"cp1252\") as f:\n", " text = f.read()\n", "\n", "austen_model = markovify.Text(text, retain_original=False, state_size=3)\n", "\n", "for i in range(10):\n", " print(austen_model.make_short_sentence(200))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Constructing single sentences is interesting, but we'd really rather construct larger documents. Here we'll construct a series of documents that have, on average, five sentences." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "from scipy.stats import poisson\n", "import numpy as np\n", "\n", "def make_basic_documents(sentence_count=5, document_count=1, model=austen_model, seed=None):\n", " def shortsentence(ct):\n", " return \" \".join([model.make_short_sentence(200) for _ in range(ct + 1)])\n", "\n", " if seed is not None:\n", " # seed both the Python generator and the NumPy one used by SciPy\n", " random.seed(seed)\n", " np.random.seed(seed)\n", " \n", " return [shortsentence(ct) for ct in poisson.rvs(sentence_count, size=document_count)]\n", "\n", "for doc in make_basic_documents(5, 10, seed=0xdecaf):\n", " print(doc)\n", " print(\"\\n###\\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We're going to use the Austen model as the main basis for _legitimate messages_ in our sample data set. For the _spam messages_, we'll train two Markov models on positive and negative product reviews (taken from the [public-domain Amazon fine foods reviews dataset on Kaggle](https://www.kaggle.com/snap/amazon-fine-food-reviews/)). We'll combine the models from these sources in different proportions so that all words are _possible_ in certain kinds of messages but some words are _more likely_ in legitimate messages or in spam." 
{ "cell_type": "markdown", "metadata": {}, "source": [ "We're going to use the Austen model as the main basis for _legitimate messages_ in our sample data set. For the _spam messages_, we'll train two Markov models on positive and negative product reviews (taken from the [public-domain Amazon fine foods reviews dataset on Kaggle](https://www.kaggle.com/snap/amazon-fine-food-reviews/)). We'll combine the models from these sources in different proportions so that all words are _possible_ in both kinds of messages but some words are _more likely_ in legitimate messages or in spam." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import gzip\n", "\n", "def train_markov_gz(fn):\n", "    \"\"\" trains a Markov model on gzipped text data \"\"\"\n", "    with gzip.open(fn, \"rt\", encoding=\"utf-8\") as f:\n", "        text = f.read()\n", "    return markovify.Text(text, retain_original=False, state_size=3)\n", "\n", "negative_model = train_markov_gz(\"data/reviews-1.txt.gz\")\n", "positive_model = train_markov_gz(\"data/reviews-5-100k.txt.gz\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can combine these models with relative weights. The legitimate model is overwhelmingly Austen (196 parts Austen to 2 parts each of negative and positive reviews), while the spam model is mostly review text with a small admixture of Austen; the cross-contamination occasionally yields somewhat unusual results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "legitimate_model = markovify.combine([austen_model, negative_model, positive_model], [196, 2, 2])\n", "spam_model = markovify.combine([austen_model, negative_model, positive_model], [3, 30, 40])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# seed both the Python generator and the NumPy one used by SciPy\n", "random.seed(0xc0ffee)\n", "np.random.seed(0xc0ffee)\n", "\n", "for s in make_basic_documents(5, 20, legitimate_model):\n", "    print(s)\n", "    print(\"\\n###\\n\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "random.seed(0xf00)\n", "np.random.seed(0xf00)\n", "\n", "for s in make_basic_documents(5, 20, spam_model):\n", "    print(s)\n", "    print(\"\\n###\\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now generate the example documents and save them to a Parquet file for use in the next notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "\n", "pd.set_option(\"io.parquet.engine\", \"pyarrow\")\n", "\n", "random.seed(0xda7aba5e)\n", "np.random.seed(0xda7aba5e)\n", "\n", "mean_sentences_per_example = 5\n", "examples_per_class = 20000\n", "\n", "# build one frame per class and concatenate once, rather than repeatedly\n", "# appending to an initially-empty frame\n", "frames = []\n", "for (label, model) in [(\"legitimate\", legitimate_model), (\"spam\", spam_model)]:\n", "    texts = make_basic_documents(mean_sentences_per_example, examples_per_class, model)\n", "    frames.append(pd.DataFrame({\"label\": label, \"text\": texts}))\n", "\n", "df = pd.concat(frames, ignore_index=True)\n", "\n", "df[\"text\"] = df[\"text\"].astype(\"str\")\n", "df[\"label\"] = df[\"label\"].astype(\"category\")\n", "df.reset_index().to_parquet(\"data/training.parquet\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's go to [the next notebook](01-vectors-and-visualization.ipynb) now!" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.6", "language": "python", "name": "jupyter" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }