{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Neural Language Model and Spinoza's Ethics\n", "\n", "In this post I will show how to build a language model for text generation using deep learning techniques.\n", "\n", "\n", "## Introduction\n", "\n", "Though natural language, in principle, has formal structure and grammar, in practice it is full of ambiguities. Modeling it statistically, from examples, is an interesting alternative. The definition of a (statistical) language model given by [Ref.1](https://en.wikipedia.org/wiki/Language_model) is:\n", "\n", "> A statistical language model is a probability distribution over sequences of words. Given such a sequence it assigns a probability to the whole sequence.\n", "\n", "Or equivalently, given a sequence $\\{w_1,...,w_m\\}$ of length $m$, the model assigns a probability \n", "\n", "$$P(w_1,...,w_m)$$\n", "\n", "to the whole sequence. By the chain rule, this joint probability factorizes as\n", "\n", "$$P(w_1,...,w_m) = \\prod_{i=1}^{m} P(w_i \\mid w_1,...,w_{i-1}),$$\n", "\n", "so modeling whole sequences is equivalent to modeling the next token given the tokens before it. In particular, a neural language model can predict the probability of the next word (or, as in this post, the next character) in a sentence (see [Ref.2](https://machinelearningmastery.com/develop-character-based-neural-language-model-keras/) for more details). \n", "\n", "The use of neural networks has become one of the main approaches to language modeling. The neural language modeling (NLM) approach can be described succinctly in three steps [Ref. 3](http://www.jmlr.org/papers/v3/bengio03a.html): \n", "\n", "> We first associate words in the vocabulary with a distributed word feature vector, then express the joint probability function of word sequences in terms of the feature vectors of these words in the sequence and then learn simultaneously the word feature vector and the parameters of the probability function. \n", "\n", "In this project I used Spinoza's *Ethics* to build a character-level NLM." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## *The Ethics*\n", "\n", "From [Ref.4](https://en.wikipedia.org/wiki/Ethics_(Spinoza)):\n", "\n", "> Ethics, Demonstrated in Geometrical Order, usually known as the Ethics, is a philosophical treatise written by Benedict de Spinoza. \n", "\n", "The article goes on to say that:\n", "\n", "> The book is perhaps the most ambitious attempt to apply the method of Euclid in philosophy. Spinoza puts forward a small number of definitions and axioms from which he attempts to derive hundreds of propositions and corollaries [...]\n", "\n", "The opening of the book, shown below, illustrates this structure. It is set out in geometrical form, paralleling the \"canonical example of a rigorous structure of argument producing unquestionable results: the example being the geometry of Euclid\" (see [link](https://timlshort.com/2010/06/21/spinozas-style-of-argument-in-ethics-i/))." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "> # PART I. CONCERNING GOD.\n", "> ## DEFINITIONS.\n", "> I. By that which is self-caused, I mean that of which the essence involves existence, or that of which the nature is only conceivable as existent.\n", "\n", "> II. A thing is called finite after its kind, when it can be limited by another thing of the same nature; for instance, a body is called finite because we always conceive another greater body. So, also, a thought is limited by another thought, but a body is not limited by thought, nor a thought by body.\n", "\n", "> III. By substance, I mean that which is in itself, and is conceived through itself: in other words, that of which a conception can be formed independently of any other conception.\n", "\n", "> IV. 
By attribute, I mean that which the intellect perceives as constituting the essence of substance.\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Imports" ] },
{ "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/Users/marcotavora/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n", " from ._conv import register_converters as _register_converters\n", "Using TensorFlow backend.\n" ] } ], "source": [ "from numpy import array\n", "from pickle import dump\n", "from keras.utils import to_categorical\n", "from keras.utils.vis_utils import plot_model\n", "from keras.models import Sequential\n", "from keras.layers import Dense\n", "from keras.layers import LSTM\n", "\n", "from IPython.core.interactiveshell import InteractiveShell\n", "InteractiveShell.ast_node_interactivity = \"all\" " ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### We first write a function to load the text\n", "\n", "The function below:\n", "- opens the file `ethics.txt`\n", "- reads it into a string\n", "- closes the file and returns the string" ] },
{ "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "file = \"ethics.txt\"" ] },
{ "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "def load_txt(file):\n", " f = open(file, 'r')\n", " text = f.read()\n", " f.close()\n", " return text" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Printing out part of the string\n", "\n", "We see that it contains many newline characters." ] },
{ "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'PART I. CONCERNING GOD.\\n\\nDEFINITIONS.\\n\\n\\nI. By that which is self--caused, I mean that of which the\\n'" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "load_txt(file)[0:100]" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Loading the string into `raw`" ] },
{ "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "raw = load_txt('ethics.txt')" ] },
{ "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'PART I. CONCERNING GOD.\\n\\nDEFINITIONS.\\n\\n\\nI. By that which is self--caused, I mean that of which the\\n'" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "raw[0:100]" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Preprocessing\n", "\n", "The first step is tokenization. With the tokens we will be able to train our model.\n",
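"\n", "A plain whitespace split (which is what the `cleaner` function below uses) keeps punctuation attached to the words. As a minimal illustration on a sample sentence:\n", "\n", "```python\n", "'By substance, I mean that which is in itself.'.split()\n", "# ['By', 'substance,', 'I', 'mean', 'that', 'which', 'is', 'in', 'itself.']\n", "```\n", "\n",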
"Some of the cleaning actions commonly applied at this stage are:\n", "- excluding stopwords (common words that add little meaning, such as \"I\" or \"am\")\n", "- removing punctuation and extra spaces\n", "- converting the text to lower case\n", "- splitting the text into words (on white space)\n", "- eliminating `--`, `\"`, numbers and brackets\n", "- dropping non-alphabetic words\n", "- stemming\n", "\n", "The `cleaner` function below removes brackets and `--`, splits on white space, strips punctuation, keeps only alphabetic tokens and lower-cases them. The stemmer and the stopword list are set up but not actually applied, which is acceptable here because the model will be trained at the character level.\n" ] },
{ "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "import nltk\n", "# nltk.download('stopwords')\n", " \n", "from nltk.corpus import stopwords\n", "from nltk.stem import PorterStemmer\n", "import string\n", "\n", "def cleaner(text):\n", " stemmer = PorterStemmer()\n", " stop = stopwords.words('english') \n", " # remove brackets and double hyphens before splitting on white space\n", " text = text.replace('[',' ').replace(']',' ').replace('--', ' ')\n", " tokens = text.split()\n", " # strip punctuation, keep alphabetic tokens only, and lower-case them\n", " table = str.maketrans('', '', string.punctuation)\n", " tokens = [w.translate(table) for w in tokens]\n", " tokens = [word for word in tokens if word.isalpha()]\n", " tokens = [word.lower() for word in tokens]\n", " return tokens" ] },
{ "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['part', 'i', 'concerning', 'god', 'definitions', 'i', 'by', 'that', 'which', 'is', 'self', 'caused', 'i', 'mean', 'that', 'of', 'which', 'the', 'essence', 'involves', 'existence', 'or', 'that', 'of', 'which', 'the', 'nature', 'is', 'only', 'conceivable', 'as', 'existent', 'ii', 'a', 'thing', 'is', 'called', 'finite', 'after', 'its', 'kind', 'when', 'it', 'can', 'be', 'limited', 'by', 'another', 'thing', 'of', 'the', 'same', 'nature', 'for', 'instance', 'a', 'body', 'is', 'called', 'finite', 'because', 'we', 'always', 'conceive', 'another', 'greater', 'body', 'so', 'also', 'a', 'thought', 'is', 'limited', 'by', 'another', 'thought', 'but', 'a', 'body', 'is', 'not', 'limited', 'by', 'thought', 'nor', 'a', 'thought', 'by', 'body', 'iii', 'by', 'substance', 'i', 'mean', 'that', 'which', 'is', 'in', 'itself', 'and']\n" ] } ], "source": [ "print(cleaner(raw)[0:100])" ] },
{ "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['part', 'i', 'concerning', 'god', 'definitions', 'i', 'by', 'that', 'which', 'is']\n" ] } ], "source": [ "tokens = cleaner(raw)\n", "#raw.split()\n", "print(tokens[0:10])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Joining the tokens back into the cleaned string `raw`" ] },
{ "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'part i concerning god definitions i by that which is sel'" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "raw = ' '.join(tokens)\n", "raw[0:56]" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Building sequences\n", "\n", "We slide a window of n + 1 = 21 characters over the cleaned text, one position at a time; the first 20 characters of each window will be the input and the last character the prediction target." ] },
{ "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "n = 20\n", "sequences = list()\n", "for i in range(n, len(raw)):\n", " sequences.append(raw[i-n:i+1])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Checking size\n", "\n", "There are around 175,000 sequences to be used for training."
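, "\n", "\n", "As a quick sanity check (a sketch, assuming `raw`, `n` and `sequences` are defined as above), the number of windows should equal the number of characters minus the window length, and every window should contain n + 1 characters:\n", "\n", "```python\n", "# one window ends at each character position n, n+1, ..., len(raw) - 1\n", "assert len(sequences) == len(raw) - n\n", "assert all(len(s) == n + 1 for s in sequences)\n", "```"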
] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "175247" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(sequences)" ] },
{ "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "first sequence is: part i concerning god\n", "second sequence is: art i concerning god \n", "third sequence is: rt i concerning god d\n" ] } ], "source": [ "print('first sequence is:',sequences[0])\n", "print('second sequence is:',sequences[1])\n", "print('third sequence is:',sequences[2])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Saving our prepared sequences" ] },
{ "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "def save_txt(sequences, file):\n", " f = open(file, 'w')\n", " f.write('\\n'.join(sequences))\n", " f.close()" ] },
{ "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "out = 'ethics_sequences.txt'\n", "save_txt(sequences, out)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Training the model" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Loading the sequences back and sanity-checking them" ] },
{ "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "first sequence is: part i concerning god\n", "second sequence is: art i concerning god \n", "third sequence is: rt i concerning god d\n" ] }, { "data": { "text/plain": [ "['part i concerning god',\n", " 'art i concerning god ',\n", " 'rt i concerning god d',\n", " 't i concerning god de',\n", " ' i concerning god def',\n", " 'i concerning god defi',\n", " ' concerning god defin',\n", " 'concerning god defini',\n", " 'oncerning god definit',\n", " 'ncerning god definiti']" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "raw = load_txt('ethics_sequences.txt')\n", "seqs = raw.split('\\n')\n", "print('first sequence is:',seqs[0])\n", "print('second sequence is:',seqs[1])\n", "print('third sequence is:',seqs[2])\n", "seqs[0:10]" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Encoding\n", "\n", "We must now encode each sequence as a sequence of integers, one integer per character.\n",
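"\n", "Each input will later be one-hot encoded over the character vocabulary, so a single training example becomes a 20 x 31 binary matrix (20 time steps, one per input character, over the 31-character vocabulary built below); this is exactly the `input_shape` the LSTM receives later. A minimal sketch of one-hot encoding a single integer, assuming a vocabulary of size 31:\n", "\n", "```python\n", "# sketch: the integer 17 ('p' in the mapping below) becomes a length-31 vector with a 1 in position 17\n", "one_hot = [0] * 31\n", "one_hot[17] = 1\n", "```\n", "\n",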
"The list `unique_chars` below contains the distinct characters in the loaded sequences (including the `\\n` separator used between them):" ] },
{ "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['\\n', ' ', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "unique_chars = sorted(list(set(raw)))\n", "unique_chars[0:10]" ] },
{ "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The values corresponding to keys:\n", "\n", "['\\n', ' ', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'à', 'â', 'æ']\n", "are:\n", "\n", "{'\\n': 0, ' ': 1, 'a': 2, 'b': 3, 'c': 4, 'd': 5, 'e': 6, 'f': 7, 'g': 8, 'h': 9, 'i': 10, 'j': 11, 'k': 12, 'l': 13, 'm': 14, 'n': 15, 'o': 16, 'p': 17, 'q': 18, 'r': 19, 's': 20, 't': 21, 'u': 22, 'v': 23, 'w': 24, 'x': 25, 'y': 26, 'z': 27, 'à': 28, 'â': 29, 'æ': 30}\n" ] } ], "source": [ "# map each character to an integer index\n", "char_int_map = dict((a, b) for b, a in enumerate(unique_chars))\n", "print('The values corresponding to keys:\\n')\n", "print(unique_chars)\n", "print('are:\\n')\n", "print(char_int_map)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Encoding the sequences with the dictionary" ] },
{ "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [], "source": [ "encoded_sequences = list()\n", "for seq in seqs:\n", " encoded_sequences.append([char_int_map[char] for char in seq])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Printing out sequences and their encoded form" ] },
{ "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "part i concerning god\n", "[17, 2, 19, 21, 1, 10, 1, 4, 16, 15, 4, 6, 19, 15, 10, 15, 8, 1, 8, 16, 5]\n", "art i concerning god \n", "[2, 19, 21, 1, 10, 1, 4, 16, 15, 4, 6, 19, 15, 10, 15, 8, 1, 8, 16, 5, 1]\n" ] } ], "source": [ "print(sequences[0])\n", "print(encoded_sequences[0])\n", "print(sequences[1])\n", "print(encoded_sequences[1])" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Building an array from the encoded sequences" ] },
{ "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "encoded_sequences = array(encoded_sequences)" ] },
{ "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[17, 2, 19, ..., 8, 16, 5],\n", " [ 2, 19, 21, ..., 16, 5, 1],\n", " [19, 21, 1, ..., 5, 1, 5],\n", " ...,\n", " [24, 20, 1, ..., 17, 13, 2],\n", " [20, 1, 3, ..., 13, 2, 10],\n", " [ 1, 3, 6, ..., 2, 10, 15]])" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "encoded_sequences" ] },
{ "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "X,y = encoded_sequences[:,:-1], encoded_sequences[:,-1]" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### One-hot encoding" ] },
{ "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [], "source": [ "sequences = [to_categorical(x, num_classes=len(char_int_map)) for x in X]" ] },
{ "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 0., 
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n", " [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sequences[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Features and targets" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [], "source": [ "X = array(sequences)\n", "y = to_categorical(y, num_classes=len(char_int_map))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Model" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": [ "size = len(char_int_map)\n", "\n", "def define_model(X):\n", " model = Sequential()\n", " model.add(LSTM(75, input_shape=(X.shape[1], X.shape[2])))\n", " model.add(Dense(size, activation='softmax'))\n", " model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n", " model.summary()\n", " return model" ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ 
"_________________________________________________________________\n", "Layer (type) Output Shape Param # \n", "=================================================================\n", "lstm_1 (LSTM) (None, 75) 32100 \n", "_________________________________________________________________\n", "dense_1 (Dense) (None, 31) 2356 \n", "=================================================================\n", "Total params: 34,456\n", "Trainable params: 34,456\n", "Non-trainable params: 0\n", "_________________________________________________________________\n" ] } ], "source": [ "model = define_model(X)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Fitting and saving model and dictionary" ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Epoch 1/30\n", " - 230s - loss: 2.0191 - acc: 0.4063\n", "Epoch 2/30\n", " - 196s - loss: 1.5064 - acc: 0.5549\n", "Epoch 3/30\n", " - 197s - loss: 1.3316 - acc: 0.6036\n", "Epoch 4/30\n", " - 196s - loss: 1.2390 - acc: 0.6287\n", "Epoch 5/30\n", " - 194s - loss: 1.1790 - acc: 0.6443\n", "Epoch 6/30\n", " - 193s - loss: 1.1355 - acc: 0.6546\n", "Epoch 7/30\n", " - 195s - loss: 1.1036 - acc: 0.6623\n", "Epoch 8/30\n", " - 192s - loss: 1.0777 - acc: 0.6697\n", "Epoch 9/30\n", " - 192s - loss: 1.0562 - acc: 0.6759\n", "Epoch 10/30\n", " - 192s - loss: 1.0383 - acc: 0.6810\n", "Epoch 11/30\n", " - 193s - loss: 1.0229 - acc: 0.6846\n", "Epoch 12/30\n", " - 193s - loss: 1.0085 - acc: 0.6894\n", "Epoch 13/30\n", " - 193s - loss: 0.9964 - acc: 0.6921\n", "Epoch 14/30\n", " - 194s - loss: 0.9862 - acc: 0.6951\n", "Epoch 15/30\n", " - 194s - loss: 0.9770 - acc: 0.6976\n", "Epoch 16/30\n", " - 195s - loss: 0.9674 - acc: 0.7014\n", "Epoch 17/30\n", " - 195s - loss: 0.9597 - acc: 0.7029\n", "Epoch 18/30\n", " - 196s - loss: 0.9526 - acc: 0.7049\n", "Epoch 19/30\n", " - 199s - loss: 0.9453 - acc: 0.7071\n", "Epoch 20/30\n", " - 195s - loss: 0.9388 - acc: 0.7085\n", "Epoch 21/30\n", " - 195s - loss: 0.9330 - acc: 0.7106\n", "Epoch 22/30\n", " - 195s - loss: 0.9271 - acc: 0.7115\n", "Epoch 23/30\n", " - 196s - loss: 0.9222 - acc: 0.7132\n", "Epoch 24/30\n", " - 194s - loss: 0.9178 - acc: 0.7147\n", "Epoch 25/30\n", " - 195s - loss: 0.9124 - acc: 0.7169\n", "Epoch 26/30\n", " - 195s - loss: 0.9090 - acc: 0.7171\n", "Epoch 27/30\n", " - 195s - loss: 0.9046 - acc: 0.7188\n", "Epoch 28/30\n", " - 200s - loss: 0.9003 - acc: 0.7192\n", "Epoch 29/30\n", " - 194s - loss: 0.8969 - acc: 0.7201\n", "Epoch 30/30\n", " - 194s - loss: 0.8939 - acc: 0.7220\n" ] } ], "source": [ "history = model.fit(X, y, epochs=30, verbose=2)" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [], "source": [ "loss = history.history['loss']" ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [ { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAYUAAAEWCAYAAACJ0YulAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAIABJREFUeJzt3Xl4XHd97/H3Vxoto21GtmRZktfYWUwSZ8FZaUISUkhCUkKhpQECCRSXlkJS+rTh0oWl9BZSaIFy0zQ0CQSKUy5JIMmFEKAJCdmdxUtiJ3i3LNmSLGvfpe/9Y47GsizJkqXRkWY+r+eZZ2bOOTPzPc/Y89Hv9zu/c8zdERERAcgKuwAREZk9FAoiIpKkUBARkSSFgoiIJCkUREQkSaEgIiJJCgWRcZjZMjNzM4tMYNsbzOw3M1GXSKooFCRtmNkuM+s1s7IRy18JftiXhVPZ5MJFJEwKBUk3O4Hrhp6Y2elANLxyROYWhYKkm+8BHxr2/MPAPcM3MLOYmd1jZg1mttvM/tbMsoJ12Wb2VTNrNLMdwDtHee2dZlZnZvvM7Etmlj2Vgs0sz8y+bma1we3rZpYXrCszs4fNrNnMmszsyWG13hLU0GZmr5vZ26ZShwgoFCT9PAuUmNmq4Mf6fcD3R2zzb0AMOAF4K4kQuTFY9zHgauAsYA3w3hGv/S7QD6wMtnk78MdTrPlvgPOBM4EzgHOBvw3W/SVQA5QDFcBnATezk4E/B85x92LgHcCuKdYholCQtDTUWvhdYCuwb2jFsKD4X+7e5u67gK8B1web/CHwdXff6+5NwD8Ne20FcCVws7t3uHs98K/AH02x3g8AX3T3endvAL4wrJ4+oBJY6u597v6kJ05YNgDkAW8ysxx33+Xu26dYh4hCQdLS94D3AzcwousIKANygd3Dlu0GqoPHVcDeEeuGLAVygLqgO6cZ+A9gwRTrrRqlnqrg8T8D24BHzWyHmX0GwN23ATcDnwfqzexeM6tCZIoUCpJ23H03iQHnq4D7R6xuJPHX99Jhy5ZwuDVRBywesW7IXqAHKHP3eHArcfdTp1hy7Sj11Ab70ubuf+nuJwDXAJ8eGjtw9x+4++8Er3XgK1OsQ0ShIGnro8Bl7t4xfKG7DwA/BP7RzIrNbCnwaQ6PO/wQ+JSZLTKzUuAzw15bBzwKfM3MSswsy8xWmNlbJ1FXnpnlD7tlAeuAvzWz8uBw2r8fqsfMrjazlWZmQCuJbqMBMzvZzC4LBqS7ga5gnciUKBQkLbn7dndfP8bqTwIdwA7gN8APgLuCdd8Gfg5sAF7i6JbGh0h0P70GHAJ+RKLPf6LaSfyAD90uA74ErAc2ApuCz/1SsP2JwC+D1z0D3Obuj5MYT/gyiZbPfhJdWJ+dRB0iozJdZEdERIaopSAiIkkKBRERSVIoiIhIkkJBRESS5twZG8vKynzZsmVhlyEiMqe8+OKLje5efqzt5lwoLFu2jPXrxzrSUERERmNmu4+9lbqPRERkGIWCiIgkKRRERCRJoSAiIkkKBRERSVIoiIhIkkJBRESSMiYUtu5v5dZHttLS2Rd2KSIis1bKQsHMFpvZY2a2xcxeNbObRtnGzOybZrbNzDaa2dmpqmfPwU5ue3w7u5s6jr2xiEiGSmVLoR/4S3dfBZwPfMLM3jRimytJXETkRGAt8O+pKqYqHgWgtrkrVR8hIjLnpSwU3L3O3V8KHrcBWzh8cfQh7wLu8YRngbiZTeYqVhM2FAr7mrtT8fYiImlhRsYUzGwZcBbw3IhV1SQuhj6khqODY1qUFuSQn5OlloKIyDhSHgpmVgTcB9zs7q0jV4/ykqOuD2pma81svZmtb2hoON46qIpHqWtRKIiIjCWloWBmOSQC4b/cfeQF0CHRMlg87PkioHbkRu5+h7uvcfc15eXHPPPrmKrjUXUfiYiMI5VHHxlwJ7DF3f9ljM0eBD4UHIV0PtDi7nWpqqkqFlX3kYjIOFJ5PYW3ANcDm8zslWDZZ4ElAO5+O/BT4CpgG9AJ3JjCeqiM59PQ1kNP/wB5kexUfpSIyJyUslBw998w+pjB8G0c+ESqahhp6Aik/S3dLJ1fOFMfKyIyZ2TMjGZIjCkA1GpcQURkVBkVCprAJiIyvowKhcpYPqBQEBEZS0aFQn5ONmVFudRqroKIyKgyKhQAKmOaqyAiMpaMC4WqeL66j0RExpCBoRClrrmLxNGwIiIyXMaFQnU8SkfvAK1d/WGXIiIy62RcKBw+hba6kERERsrYUNC4gojI0TIvFIbmKuiwVBGRo2RcKJQV5ZGTbeo+EhEZRcaFQlaWURmLUqe5CiIiR8m4UADNVRARGUuGhoIutiMiMpqMDIXqeJT9rd30DwyGXYqIyKySkaFQGYsy6HCgrSfsUkREZpWMDIWqeOKw1Dp1IYmIHCEjQ6Fas5pFREaVkaFQqctyioiMKiNDoSgvQiyaoyOQRERGyMhQAB2WKiIymswNhVi+xhREREbI3FCIR6lr0ZiCiMhwGR0KLV19tPfoYjsiIkMyOBQ0V0FEZKSMDQXNVRAROVrKQsHM7jKzejPbPMb6mJk9ZGYbzOxVM7sxVbWMpkpzFUREjpLKlsJ3gCvGWf8J4DV3PwO4BPiameWmsJ4jLCjOI8t0WU4RkeFSFgru/gTQNN4mQLGZGVAUbDtjo76R7CwWluTrspwiIsOEOabwLWAVUAtsAm5y91HPZW1ma81svZmtb2homLYCNIFNRORIYYbCO4BXgCrgTOBbZlYy2obufoe7r3H3NeXl5dNWQCIUNKYgIjIkzFC4EbjfE7YBO4FTZrKAxAS2LgYHfSY/VkRk1gozFPYAbwMwswrgZGDHTBZQHc+nb8BpbNfFdkREACKpemMzW0fiqKIyM6sBPgfkALj77cA/AN8xs02AAbe4e2Oq6hlNZSw4LLWlmwUl+TP50SIis1LKQsHdrzvG+lrg7an6/Ik4PFehizMXx8MsRURkVsjYGc1weFazjkASEUnI6FAoiUYozM3WqS5ERAIZHQpmprkKIiLDZHQoQOJ6zZqrICKSkPGhUB3Pp06nuhARARQKVMWiNLb30t03EHYpIiKhUygERyDp0pwiIgqFI+YqiIhkuowPBV2BTUTksIwPhYpYHgB1OgJJREShkBfJprw4T91HIiIoFIDgugo6LFVERKEAibkKGlMQEVEoAIm5CrXNXbjrYjsiktkUCiS6j7r7BjnU2Rd2KSIioVIoAFXxxAV2NNgsIplOoYAmsImIDFEooFAQERmiUADmF+aSG8miVuc/EpEMp1AgcbGd6nhUh6WKSMZTKASq4vnqPhKRjKdQCFTGojr/kYhkPIVCoCoe5UBbN30Dg2GXIiISGoVCoDqejzvs12CziGQwhUJAh6WKiCgUkpKhoLOlikgGS1komNldZlZvZpvH2eYSM3vFzF41s1+nqpaJqIoNtRTUfSQimSuVLYXvAFeMtdLM4sBtwO+5+6nAH6SwlmOK5mZTWpCj7iMRyWgpCwV3fwJoGmeT9wP3u/ueYPv6VNUyUV
XxqEJBRDJamGMKJwGlZva4mb1oZh8aa0MzW2tm681sfUNDQ8oKSoSCuo9EJHOFGQoR4M3AO4F3AH9nZieNtqG73+Hua9x9TXl5ecoKqlZLQUQyXJihUAM84u4d7t4IPAGcEWI9VMXzaevpp7VbF9sRkcwUZij8BLjIzCJmVgCcB2wJsZ7kYak63YWIZKpIqt7YzNYBlwBlZlYDfA7IAXD32919i5k9AmwEBoH/dPcxD1+dCZWxwxPYTl5YHGYpIiKhSFkouPt1E9jmn4F/TlUNk1UdtBR0Cm0RyVSa0TxMeXEekSzTYLOIZCyFwjDZWcbCmK6rICKZS6EwQlU8qstyikjGUiiMoLkKIpLJFAojVMby2d/SzcCgh12KiMiMUyiMUBWP0j/oNLT1hF2KiMiMUyiMoMNSRSSTKRRG0BXYRCSTKRRGqIrnA1CnK7CJSAZSKIxQnJ9DcV5Ep9AWkYykUBjF8vJCNtY0h12GiMiMm1AomNkKM8sLHl9iZp8KLqeZlt5x6kJe2tOswWYRyTgTbSncBwyY2UrgTmA58IOUVRWyq1dXAvD/NtaGXImIyMyaaCgMuns/8G7g6+7+F0Bl6soK19L5haxeFOPhjXVhlyIiMqMmGgp9ZnYd8GHg4WBZTmpKmh2uXl3JxpoWdh/sCLsUEZEZM9FQuBG4APhHd99pZsuB76eurPC9c3UVgFoLIpJRJhQK7v6au3/K3deZWSlQ7O5fTnFtoaqOR3nz0lIe2qBxBRHJHBM9+uhxMysxs3nABuBuM/uX1JYWvmtWV7J1fxvb6tvCLkVEZEZMtPso5u6twO8Dd7v7m4HLU1fW7HDV6ZWYwUMb1IUkIplhoqEQMbNK4A85PNCc9haU5HP+8vk8tLEWd51KW0TS30RD4YvAz4Ht7v6CmZ0A/DZ1Zc0eV59RyY6GDrbUqQtJRNLfRAea/6+7r3b3Pw2e73D396S2tNnhytMqyc4yHtJENhHJABMdaF5kZg+YWb2ZHTCz+8xsUaqLmw3mFebylpVlPKwuJBHJABPtProbeBCoAqqBh4JlGeGa1ZXsbepiQ01L2KWIiKTUREOh3N3vdvf+4PYdoDyFdc0qbz91IbnZWTysOQsikuYmGgqNZvZBM8sObh8EDqaysNkkFs3h4pPKeXhjHYOD6kISkfQ10VD4CInDUfcDdcB7SZz6YkxmdlcwBrH5GNudY2YDZvbeCdYSimvOqGR/azcv7jkUdikiIikz0aOP9rj777l7ubsvcPdrSUxkG893gCvG28DMsoGvkDjcdVa7fFUF+TlZOu2FiKS1qVx57dPjrXT3J4CmY7zHJ0lcq6F+CnXMiMK8CJedsoCfbqqjf2Aw7HJERFJiKqFgU/lgM6smcX2G2yew7VozW29m6xsaGqbysVNy9eoqGtt7eW7nsbJORGRumkooTHXE9evALe4+cMwPcr/D3de4+5ry8vAOerr05AUU5mbzsCayiUiaGjcUzKzNzFpHubWRmLMwFWuAe81sF4mB69vM7NopvmdKRXOzufxNFfxs83761IUkImlo3FBw92J3LxnlVuzukal8sLsvd/dl7r4M+BHwZ+7+46m850y4ZnUVzZ19/GZbY9iliIhMu6l0H43LzNYBzwAnm1mNmX3UzD5uZh9P1WfOhItOKqM4P6KjkEQkLU3pr/3xuPt1k9j2hlTVMd3yItlccepCHtm8n+6+AfJzssMuSURk2qSspZDOrj6jiraefn79RnhHQomIpIJC4ThcuGI+8wpzeXijrsgmIulFoXAccrKzuOK0hfzytQN09vaHXY6IyLRRKByna1ZX0dU3wP9snfWTsUVEJkyhcJzOXT6P8uI8HYUkImlFoXCcsrOMd55eyWOvN9Dc2Rt2OSIi00KhMAXvO2cx/QODfPlnW8MuRURkWigUpmBVZQkfu+gE7n1hL09rhrOIpAGFwhTdfPlJLJ1fwGfu30RX7zHP7SciMqspFKYompvNP/3+6exp6uRff/lG2OWIiEyJQmEaXLiijOvOXcx/PrmDjTXNYZcjInLcFArT5DNXrqKsKI+//tFGnVZbROYshcI0iUVz+NK1p7F1fxv/8evtYZcjInJcFArT6O2nLuSdp1fyzV9tY1t9e9jliIhMmkJhmn3+904lmpvNZ+7byODgVK9YKiIysxQK06y8OI+/u/pNrN99iO8/tzvsckREJkWhkALvObuai04s4ys/28q+5q6wyxERmTCFQgqYGf/73acz6PA3D2zCXd1IIjI3KBRSZPG8Av7qHSfz+OsN/OQVnUlVROYGhUIKffjCZZy5OM4XHnqVg+09YZcjInJMCoUUys4ybn3vatp7+vnCQ6+FXY6IyDEpFFLspIpiPnHpSh7cUMsvXzsQdjkiIuNSKMyAP7tkJacsLOame19m/a6msMsRERmTQmEG5Eay+O5HzqWiJJ8P3/W8gkFEZi2FwgypKMln3drzWRAEw4u7FQwiMvsoFGZQRUk+6z42FAwv8OLuQ2GXJCJyhJSFgpndZWb1ZrZ5jPUfMLONwe1pMzsjVbXMJgtjiWAoK8oNWgwKBhGZPVLZUvgOcMU463cCb3X31cA/AHeksJZZZWEs0ZU0PwiGl/YoGERkdkhZKLj7E8CYHefu/rS7D/0aPgssSlUts1FlLMq9Q8Fw5/O8rGAQkVlgtowpfBT42VgrzWytma03s/UNDQ0zWFZqVcairPvY+ZQW5vKhO5/nlb26lKeIhCv0UDCzS0mEwi1jbePud7j7GndfU15ePnPFzYCqeJR1a88nXpjD9Xc+xwYFg4iEKNRQMLPVwH8C73L3g2HWEqbqeJR7115AvCCHDyoYRCREoYWCmS0B7geud/c3wqpjtqiOJ7qSYtFEMPxCp8QQkRCk8pDUdcAzwMlmVmNmHzWzj5vZx4NN/h6YD9xmZq+Y2fpU1TJXLCot4N6157NkXgEfu2c9n/vJZrr7BsIuS0QyiM21C8CsWbPG169P7/zo6R/g1kde587f7OSUhcV86/1nsXJBcdhlicgcZmYvuvuaY20X+kCzHC0vks3fXf0m7r7hHOrberjm357iv1/Yoyu4iUjKKRRmsUtPWcDPbrqIs5bEueW+TXxy3cu0dveFXZaIpDGFwixXUZLP9z56Hn/1jpP52eb9XPWNJzUDWkRSRqEwB2RnGZ+4dCU//JMLcIc/uP0Zbnt8G4OD6k4SkemlUJhD3ry0lJ/edBFXnLaQWx95nevveo79Ld1hlyUiaUShMMfEojl867qz+Mp7TufF3Ye49KuP89Wfv66xBhGZFgqFOcjMeN85S3j05rfyu2+q4FuPbePiWx/j20/s0LwGEZkSzVNIA6/WtnDrI6/z6zcaqIzl8xeXn8Tvn11NJFuZLyIJmqeQQU6tivHdj5zLuo+dT0VJPn9930au+MaTPLJ5v+Y2iMikKBTSyAUr5vPAn13I7R98M+7Ox7//Iu++7Wme2Z6x5xoUkUlSKKQZM+OK0xby85sv5tb3rOZAazfXfftZrr/zOZ7a1qiWg4iMS2MKaa67b4DvPbOb23+9nYMdvZxcU
cwNb1nGu8+qJj8nO+zyRGSGTHRMQaGQIbr7BnhoQy13P7WL1+paiRfk8P5zl3D9BUupjEXDLk9EUkyhIKNyd57f2cRdT+3kF68dwMy48rSFfOR3lnP2ktKwyxORFJloKERmohiZPcyM806Yz3knzGdvUyf3PLOLe1/Yy8Mb6zhjcZyPvGUZV55WSW5Ew00imUgtBaGjp5/7X6rh7qd2saOxg9KCHK5eXcW1Z1Vz9pI4ZhZ2iSIyReo+kkkbHHSe+G0D9720j0df3U9P/yBL5xdw7ZnVXHtWNcvLCsMuUUSOk0JBpqStu4+fv3qAH7+8j6e2N+IOZy6O8+6zqrl6dSXzi/LCLlFEJkGhINNmf0s3D27YxwMv17KlrpVIlnHxSeW868wqLj1lASX5OWGXKCLHoFCQlNi6v5Ufv1zLT17ZR11LN5Es49zl83jbqgouX7WApfPVxSQyGykUJKUGB52X9hzil1vq+Z+tB3jjQDsAK8oLuXxVBW9bVcHZS+I6KZ/ILKFQkBm152Anv9p6gP/ZWs+zOw7SN+DEC3K45KRyLltVwcUnlhEvyA27TJGMpVCQ0LR19/Hkbxv51ZZ6Hnu9nqaOXszg1KoSLlxRxgUr5nPOsnkU5WmajMhMUSjIrDAw6Lyyt5mntjXy9PZGXtrdTO/AIJEs44zFcS5cMZ8LVszn7CWlOheTSAopFGRW6u4b4MXdh3h6eyNPbTvIxppmBh1yI1msWVrKBcFs69WLYgoJkWmkUJA5obW7jxd2NvH09oM8vf0gW+pagURInLkozrnL53Hu8nmcvbRU3U0iUxB6KJjZXcDVQL27nzbKegO+AVwFdAI3uPtLx3pfhUJ6O9TRy/rdh3h+50Ge39nE5tpWBgad7Czj1KoSzl02j3OWz+OcZfOYV6iBa5GJmg2hcDHQDtwzRihcBXySRCicB3zD3c871vsqFDJLR08/L+05xPM7m3h+ZxMv722mt38QgBPKCjmtOsbqRTFOr45xanVMrQmRMYR+llR3f8LMlo2zybtIBIYDz5pZ3Mwq3b0uVTXJ3FOYF+GiE8u56MRyAHr6B9hY08LzO5vYsLeZ9buaeHBDLQBmiaBYvSjO6dUxTl8U49SqEgpyFRQiExXm/5ZqYO+w5zXBsqNCwczWAmsBlixZMiPFyeyUF8nmnGWJ7qMhDW09bN7XwqZ9LWysaeHp7Y088PI+ALIMTigv4uSFxZxcUcxJFcWcVFHE0vmFZGfp7K8iI4UZCqP9jxy1L8vd7wDugET3USqLkrmnvDiPS09ZwKWnLEguq2/tZlMQFJv3tbKppoWfbqpjqLc0L5LFygVFQUgUc/LCxOOqWJQshYVksDBDoQZYPOz5IqA2pFokzSwoyedtJfm8bVVFcllnbz/b6tt5fX8bbxxo440D7Ty742CyVQEQzcnmhPJCVpQXJe9XlBexvKyQaK4OkZX0F2YoPAj8uZndS2KguUXjCZJKBbkRVi+Ks3pR/IjlLV19/PZAG68faGN7fQfbG9p5ee8hHtpYm2xZmEFVLMqKBUWsCMJi5YLEbX5hri5EJGkjZaFgZuuAS4AyM6sBPgfkALj77cBPSRx5tI3EIak3pqoWkfHEojmsWTaPNcPGKSAx0W5nYyIkdjQk7rc3tLN+VxOdvQPJ7eIFOawcFhIrFhSxsryI6ri6omTu0eQ1kUlyd+pautlW3564NbQnHzd19Ca3G+qKWlZWyOLSAhaVRlk8r4DFpVGq4lHN2JYZFfohqSLpysyoiid+2C8+qfyIdU0dvYfDIgiMV/e18Oir++kbOPIPsIqSPBaVJkJi8bxEaFTHC6gujVIZy1doSCgUCiLTaF5hbvLUHMMNDjoH2rrZ29RFzaFO9jZ1sfdQJzWHOnlh1yEe3FDL4IhGe1lRHtWlURbFo1SXRqmK5VNdWkB18DwW1RXvZPopFERmQFaWURmLUhmLHhUYAH0Dg+xv6WZfcxf7DnUl72tbuthS18ovtxygJ5jJPaQ4L0J1aTQZEiPvy4vyNAAuk6ZQEJkFcrKzEuMN8wpGXe/uNLb3UtvcdURw1BzqpOZQF8/vaqKtu/+I1+RFsqiKR1lYkk9lLJ+FQ7eSfCpjUSpieZQV5mkwXI6gUBCZA8yM8uI8yovzOGNxfNRtWrv7EmFxKBEW+5q7qG3uZn9rN8/tbOJAazf9I/qoIllGRUkiLCpK8lhQnM+CofviPBaU5FFRnE+8IEetjgyhUBBJEyX5OZRU5rCqsmTU9YODTmNHD/tbuhO31m7qWro50JK437q/jSffaKStp/+o1+ZmZyVDqWJEaCwozg+W5zO/MFctjzlOoSCSIbKyLPgxz2f1orG36+ztp761h/q2Hurbug8/bu2mvq2HnY0dPLeziebOvqNem51llBXlJkOjrCiPeUW5zC/MZV5wm194eJmOsJp9FAoicoSC3AjLyiIsKyscd7vuvgEa2hKB0dDWHQRHIkgOtPZQ25I4/1RTR+9R3VZDCnOzmVeUy7yCXOIFuZQW5AT3ucQLcogX5FA64nlRXkRdWSmkUBCR45Kfkz3u4PgQd6e1u5+mjl6aOno42N7LwY5emjp6OdgeLOvo5VBnLzsa22nu7Dtq0Hy43EgW5UV5lBXlUh60RsqSz/MpK8qlrDgxiF6UH9HZcCdJoSAiKWVmxKI5xKI5LD9G62NI38AgLV19NHf20dzZy6Hgvrmzj8aOHhraeoKjsbrZUJNojQyM0RopyM2mKC9CcX6EovwcivMiw55HKM6LUBINWiSFiZZKPHheEs3JuFBRKIjIrJOTnZVsAUzE4KBzqLOXhvYeGtt6aWxPtD7au/tp6+6jvaeftp5+2rr7ae/uo76tO1jXT3tvP2Od7ccscW6seDTniO6toW6teMGRIZIOXVwKBRGZ87KyjPlFecwvyoOFk3vt4KDT1t3Poc5emrv6EvedvRzqCFonXX3JlkpDew+/rU90cbWPcpTWkEiWUZwfoTg/J9kqKc7PCe6DVkre4eeFuREKgxZMQV6iZVOYF6EgJ3vGj+ZSKIhIRsvKMmIFOcQKJnfakMNdXIlureFdXIc6e2kLWilt3YlWyr7mruTz9p7+Mbu7RirIzU4GxgfOW8IfX3TC8ezmhCkURESOw2S7uIZzd7r6BoLg6KejJ7j1DtDRkwiNzt5+2nsGjlhXXjz5z5oshYKIyAwzMwpyIxTkRqgYfa5haLLCLkBERGYPhYKIiCQpFEREJEmhICIiSQoFERFJUiiIiEiSQkFERJIUCiIikmQ+1pmgZikzawB2H+fLy4DGaSxnNki3fUq3/YH026d02x9Iv30abX+Wunv5sV4450JhKsxsvbuvCbuO6ZRu+5Ru+wPpt0/ptj+Qfvs0lf1R95GIiCQpFEREJCnTQuGOsAtIgXTbp3TbH0i/fUq3/YH026fj3p+MGlMQEZHxZVpLQURExqFQEBGRpIwJBTO7wsxeN7NtZvaZsOuZDma2y8w2mdkrZrY+7Homy8zuMrN6M9s8bNk8M/uFmf02uC8Ns8bJGmOfPm9m+4Lv6RUzuyrMGifDzBab
2WNmtsXMXjWzm4Llc/J7Gmd/5vJ3lG9mz5vZhmCfvhAsX25mzwXf0X+bWe6E3i8TxhTMLBt4A/hdoAZ4AbjO3V8LtbApMrNdwBp3n5OTbszsYqAduMfdTwuW3Qo0ufuXg/AudfdbwqxzMsbYp88D7e7+1TBrOx5mVglUuvtLZlYMvAhcC9zAHPyextmfP2TufkcGFLp7u5nlAL8BbgI+Ddzv7vea2e3ABnf/92O9X6a0FM4Ftrn7DnfvBe4F3hVyTRnP3Z8AmkYsfhfw3eDxd0n8h50zxtinOcvd69z9peBxG7AFqGaOfk/j7M+c5QntwdOc4ObAZcCPguUT/o4yJRSqgb3Dntcwx/8hBBx41MxeNLO1YRczTSrcvQ4S/4GBBSHXM13+3Mw2Bt1Lc6KrZSQzWwacBTxHGnxPI/YH5vB3ZGbZZvYKUA/8AtgONLt7f7DJhH/7ofSgAAADNElEQVTzMiUUbJRl6dBv9hZ3Pxu4EvhE0HUhs8+/AyuAM4E64GvhljN5ZlYE3Afc7O6tYdczVaPsz5z+jtx9wN3PBBaR6BlZNdpmE3mvTAmFGmDxsOeLgNqQapk27l4b3NcDD5D4xzDXHQj6fYf6f+tDrmfK3P1A8J92EPg2c+x7Cvqp7wP+y93vDxbP2e9ptP2Z69/REHdvBh4HzgfiZhYJVk34Ny9TQuEF4MRgND4X+CPgwZBrmhIzKwwGyjCzQuDtwObxXzUnPAh8OHj8YeAnIdYyLYZ+PAPvZg59T8Eg5p3AFnf/l2Gr5uT3NNb+zPHvqNzM4sHjKHA5ibGSx4D3BptN+DvKiKOPAIJDzL4OZAN3ufs/hlzSlJjZCSRaBwAR4AdzbZ/MbB1wCYnT/B4APgf8GPghsATYA/yBu8+Zgdsx9ukSEt0SDuwC/mSoP362M7PfAZ4ENgGDweLPkuiHn3Pf0zj7cx1z9ztaTWIgOZvEH/o/dPcvBr8R9wLzgJeBD7p7zzHfL1NCQUREji1Tuo9ERGQCFAoiIpKkUBARkSSFgoiIJCkUREQkSaEgMoKZDQw7W+Yr03lWXTNbNvwMqiKzTeTYm4hknK7glAEiGUctBZEJCq5f8ZXg3PXPm9nKYPlSM/tVcDK1X5nZkmB5hZk9EJznfoOZXRi8VbaZfTs49/2jwSxUkVlBoSBytOiI7qP3DVvX6u7nAt8iMUOe4PE97r4a+C/gm8HybwK/dvczgLOBV4PlJwL/x91PBZqB96R4f0QmTDOaRUYws3Z3Lxpl+S7gMnffEZxUbb+7zzezRhIXbukLlte5e5mZNQCLhp9aIDhd8y/c/cTg+S1Ajrt/KfV7JnJsaimITI6P8XisbUYz/PwzA2hsT2YRhYLI5Lxv2P0zweOnSZx5F+ADJC6HCPAr4E8heRGUkpkqUuR46S8UkaNFg6tYDXnE3YcOS80zs+dI/EF1XbDsU8BdZvZXQANwY7D8JuAOM/soiRbBn5K4gIvIrKUxBZEJCsYU1rh7Y9i1iKSKuo9ERCRJLQUREUlSS0FERJIUCiIikqRQEBGRJIWCiIgkKRRERCTp/wMHHuFaiJViUwAAAABJRU5ErkJggg==\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "import matplotlib.pyplot as plt\n", "import numpy as np\n", "%matplotlib inline\n", "\n", "fig = plt.figure()\n", "plt.plot(loss)\n", "plt.title('Model Loss')\n", "plt.xlabel('Epoch')\n", "plt.ylabel('Loss')\n", "#pylab.xlim([0,60])\n", "fig.savefig('loss.png')\n", "plt.show();" ] }, { "cell_type": "code", "execution_count": 59, "metadata": {}, "outputs": [], "source": [ "model.save('model.h5')" ] }, { "cell_type": "code", "execution_count": 60, "metadata": {}, "outputs": [], "source": [ "dump(char_int_map, open('char_int_map.pkl', 'wb'))" ] }, { "cell_type": "code", "execution_count": 61, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "keras.models.Sequential" ] }, "execution_count": 61, "metadata": {}, "output_type": "execute_result" } ], "source": [ "type(model)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generating sequences" ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [], "source": [ "from pickle import load\n", "from numpy import array\n", "from keras.models import load_model\n", "from keras.utils import to_categorical\n", "from keras.preprocessing.sequence import pad_sequences" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [], "source": [ "def gen_seq(model, char_int_map, n_seq, test_seq, size_gen):\n", " num_classes=len(char_int_map)\n", " txt = test_seq\n", " print(txt)\n", " # generate a fixed number of characters\n", " for i in range(size_gen):\n", " encoded = pad_sequences([[char_int_map[c] for c in txt]], \n", " maxlen=n_seq, truncating='pre')\n", " encoded = to_categorical(encoded, num_classes=num_classes)\n", " ypred = model.predict_classes(encoded)\n", " int_to_char = ''\n", " for c, idx in char_int_map.items():\n", " if idx == ypred:\n", " int_to_char = c\n", " break\n", " # append to input\n", " txt += int_to_char\n", " return 
txt" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Loading the model and the dictionary" ] },
{ "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [], "source": [ "# load the model\n", "model = load_model('model.h5')\n", "# load the mapping\n", "char_int_map = load(open('char_int_map.pkl', 'rb'))" ] },
{ "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "that which is self caused\n", "that which is self caused by another modifications of the human b\n", "nature for instance a body\n", "nature for instance a body in the same as the constitution of the \n" ] } ], "source": [ "# seed the generator with two openings taken from the text\n", "print(gen_seq(model, char_int_map, 20, 'that which is self caused', 40))\n", "print(gen_seq(model, char_int_map, 20, 'nature for instance a body', 40))" ] },
{ "cell_type": "code", "execution_count": 76, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'part i concerning god definitions i by that which is self caused i mean that of which the essence involves existence or that of which the nature is only conceivable as existent ii a thing is called finite after its kind when it can be limited by another thing of the same nature for instance a body is called finite because we always conceive another greater body so also a thought is limited by anot'" ] }, "execution_count": 76, "metadata": {}, "output_type": "execute_result" } ], "source": [ "raw[0:400]" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### The corresponding passages in the original text\n", "\n", "For comparison, the original continuations of the two seeds are:\n", "\n", " 1) \"that which is self caused i mean that of which the essence\"\n", " 2) \"nature for instance a body is called finite because we always conceive\"" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Conclusion\n", "\n", "The generated continuations are locally plausible but quickly drift away from the original text. The model still has room to learn: the training loss was still decreasing after 30 epochs, so more epochs (or a larger network) would probably improve the results." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" } }, "nbformat": 4, "nbformat_minor": 2 }