{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Neural Network Example with Keras" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**(C) 2018-2024 by [Damir Cavar](http://damir.cavar.me/)**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Version:** 1.2, January 2024" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**License:** [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/) ([CA BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Prerequisites:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install -U numpy" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install -U keras" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is a tutorial related to the L665 course on Machine Learning for NLP focusing on Deep Learning, Spring 2018 and 2019 at Indiana University." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This material is based on Jason Brownlee's tutorial [Develop Your First Neural Network in Python With Keras Step-By-Step](https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/). See for more details and explanations this page. All copyrights are his, except on a few small comments that I added." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Keras](https://keras.io/) is a neural network module that is running on top of [TensorFlow](https://github.com/tensorflow/tensorflow) (among others). Make sure that you install [TensorFlow](https://github.com/tensorflow/tensorflow) on your system. Go to the [Keras](https://keras.io/) homepage and install the module in Python. This example also requires that *Scipy* and *Numpy* are installed in your system." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Introduction" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As explained in the above tutorial, the steps are:\n", "\n", "- loading data (prepared for the process, that is vectorized and formated)\n", "- defining a model (layers)\n", "- compiling the model\n", "- fitting the model\n", "- evaluating the model\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have to import the necessary modules from Keras:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from keras.models import Sequential\n", "from keras.layers import Dense" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will use numpy as well:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import numpy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In his tutorial, as linked above, Jason Brownlee suggests that we initialize the random number generator with a fixed number to make sure that the results are the same at every run, since the learning algorithm makes use of a stochastic process. We initialize the random number generator with 7:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "numpy.random.seed(7)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The data-set suggested in Brownlee's tutorial is [Pima Indians Diabetes Data Set](https://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes). 
{ "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "dataset = numpy.loadtxt(\"data/pima-indians-diabetes.csv\", delimiter=\",\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The data is organized as follows: the first 8 columns per row define the features, that is, the input variables for the neural network. The last column defines the output as a binary value of $0$ or $1$. We can separate those two from the dataset into two variables:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "X = dataset[:,0:8]\n", "Y = dataset[:,8]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Just to verify the content:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 6. , 148. , 72. , ..., 33.6 , 0.627, 50. ],\n", " [ 1. , 85. , 66. , ..., 26.6 , 0.351, 31. ],\n", " [ 8. , 183. , 64. , ..., 23.3 , 0.672, 32. ],\n", " ...,\n", " [ 5. , 121. , 72. , ..., 26.2 , 0.245, 30. ],\n", " [ 1. , 126. , 60. , ..., 30.1 , 0.349, 47. ],\n", " [ 1. , 93. , 70. , ..., 30.4 , 0.315, 23. ]])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "X" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([1., 0., 1., 0., 1., 0., 1., 0., 1., 1., 0., 1., 0., 1., 1., 1., 1.,\n", " 1., 0., 1., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0., 0., 1., 0., 0.,\n", " 0., 0., 0., 1., 1., 1., 0., 0., 0., 1., 0., 1., 0., 0., 1., 0., 0.,\n", " 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 1., 0., 1., 0.,\n", " 0., 0., 1., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 1.,\n", " 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 1., 1., 0.,\n", " 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 0., 0., 1., 1., 1., 0., 0.,\n", " 0., 1., 0., 0., 0., 1., 1., 0., 0., 1., 1., 1., 1., 1., 0., 0., 0.,\n", " 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1.,\n", " 0., 1., 1., 0., 0., 0., 1., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0.,\n", " 1., 1., 0., 0., 0., 1., 0., 1., 0., 1., 0., 0., 0., 0., 0., 1., 1.,\n", " 1., 1., 1., 0., 0., 1., 1., 0., 1., 0., 1., 1., 1., 0., 0., 0., 0.,\n", " 0., 0., 1., 1., 0., 1., 0., 0., 0., 1., 1., 1., 1., 0., 1., 1., 1.,\n", " 1., 0., 0., 0., 0., 0., 1., 0., 0., 1., 1., 0., 0., 0., 1., 1., 1.,\n", " 1., 0., 0., 0., 1., 1., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1.,\n", " 1., 0., 0., 0., 1., 0., 1., 0., 0., 1., 0., 1., 0., 0., 1., 1., 0.,\n", " 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 1., 1., 0., 0., 1., 0.,\n", " 0., 0., 1., 1., 1., 0., 0., 1., 0., 1., 0., 1., 1., 0., 1., 0., 0.,\n", " 1., 0., 1., 1., 0., 0., 1., 0., 1., 0., 0., 1., 0., 1., 0., 1., 1.,\n", " 1., 0., 0., 1., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 1., 1., 1.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 1., 1.,\n", " 1., 0., 1., 1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 1., 0., 0., 0.,\n", " 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 0., 0.,\n", " 1., 0., 0., 1., 0., 0., 1., 0., 1., 1., 0., 1., 0., 1., 0., 1., 0.,\n", " 1., 1., 0., 0., 0., 0., 1., 1., 0., 1., 0., 1., 0., 0., 0., 0., 1.,\n", " 1., 0., 1., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0.,\n", " 0., 1., 1., 1., 0., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0., 0., 1.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.,\n", " 1., 0., 0., 0., 1., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0.,\n", " 1., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0.,\n", " 1., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.,\n", " 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 1., 1., 1., 0.,\n", " 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,\n", " 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1.,\n", " 0., 1., 1., 0., 0., 0., 1., 0., 1., 0., 1., 0., 1., 0., 1., 0., 0.,\n", " 1., 0., 0., 1., 0., 0., 0., 0., 1., 1., 0., 1., 0., 0., 0., 0., 1.,\n", " 1., 0., 1., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n", " 0., 1., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0.,\n", " 1., 1., 1., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 1., 1.,\n", " 1., 1., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 1., 0.,\n", " 0., 1., 0., 1., 0., 0., 0., 0., 0., 1., 0., 1., 0., 1., 0., 1., 1.,\n", " 0., 0., 0., 0., 1., 1., 0., 0., 0., 1., 0., 1., 1., 0., 0., 1., 0.,\n", " 0., 1., 1., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1.,\n", " 1., 1., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 1., 0., 0., 1., 0.,\n", " 1., 1., 1., 0., 0., 1., 1., 1., 0., 1., 0., 1., 0., 1., 0., 0., 0.,\n", " 0., 1., 0.])" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "Y" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will define our model in the next step. The first layer is the input layer. It is set to have 8 inputs for the 8 variables using the attribute *input_dim*. The *Dense* class defines the layers to be fully connected. The number of neurons is specified as the first argument to the initializer. We also choose the activation function using the *activation* attribute. This should be clear from the presentations in class and from other examples and discussions in related notebooks in this collection. The output layer consists of one neuron and uses the *sigmoid* activation function to return a value between $0$ and $1$:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "model = Sequential()\n", "model.add(Dense(12, input_dim=8, activation='relu'))\n", "model.add(Dense(8, activation='relu'))\n", "model.add(Dense(1, activation='sigmoid'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The defined network needs to be compiled. The compilation process creates a specific implementation of the network using the backend (e.g. TensorFlow or Theano), decides whether a GPU or a CPU will be used, selects the loss function and the optimizer, and determines which metrics should be collected during training. In this case we use **binary cross-entropy** as the loss function, an efficient implementation of a gradient descent algorithm called **Adam** as the optimizer, and we store the classification accuracy for the output and analysis." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The training of the model is achieved by calling the *fit* method. The parameters specify the input matrix and the output vector, as well as the number of iterations through the data set used for training, called **epochs**. 
The **batch size** specifies the number of instances that are evaluated before an update of the parameters is applied." ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Epoch 1/150\n", "768/768 [==============================] - 0s 356us/step - loss: 0.5882 - acc: 0.7305\n", "Epoch 2/150\n", "768/768 [==============================] - 0s 326us/step - loss: 0.5404 - acc: 0.7435\n", "Epoch 3/150\n", "768/768 [==============================] - 0s 335us/step - loss: 0.5319 - acc: 0.7487\n", "Epoch 4/150\n", "768/768 [==============================] - 0s 342us/step - loss: 0.5148 - acc: 0.7578\n", "Epoch 5/150\n", "768/768 [==============================] - 0s 369us/step - loss: 0.5291 - acc: 0.7461\n", "Epoch 6/150\n", "768/768 [==============================] - 0s 391us/step - loss: 0.5063 - acc: 0.7708\n", "Epoch 7/150\n", "768/768 [==============================] - 0s 366us/step - loss: 0.5107 - acc: 0.7487\n", "Epoch 8/150\n", "768/768 [==============================] - 0s 338us/step - loss: 0.5117 - acc: 0.7409\n", "Epoch 9/150\n", "768/768 [==============================] - 0s 349us/step - loss: 0.5027 - acc: 0.7487\n", "Epoch 10/150\n", "768/768 [==============================] - 0s 346us/step - loss: 0.5008 - acc: 0.7604\n", "Epoch 11/150\n", "768/768 [==============================] - 0s 361us/step - loss: 0.5021 - acc: 0.7617\n", "Epoch 12/150\n", "768/768 [==============================] - 0s 353us/step - loss: 0.4994 - acc: 0.7565\n", "Epoch 13/150\n", "768/768 [==============================] - 0s 348us/step - loss: 0.5132 - acc: 0.7474\n", "Epoch 14/150\n", "768/768 [==============================] - 0s 342us/step - loss: 0.4934 - acc: 0.7747\n", "Epoch 15/150\n", "768/768 [==============================] - 0s 350us/step - loss: 0.4877 - acc: 0.7682\n", "Epoch 16/150\n", "768/768 [==============================] - 0s 328us/step - loss: 0.5093 - acc: 0.7526\n", "Epoch 17/150\n", "768/768 [==============================] - 0s 327us/step - loss: 0.5231 - acc: 0.7422\n", "Epoch 18/150\n", "768/768 [==============================] - 0s 349us/step - loss: 0.4977 - acc: 0.7669\n", "Epoch 19/150\n", "768/768 [==============================] - 0s 349us/step - loss: 0.4998 - acc: 0.7630\n", "Epoch 20/150\n", "768/768 [==============================] - 0s 344us/step - loss: 0.4934 - acc: 0.7630\n", "Epoch 21/150\n", "768/768 [==============================] - 0s 353us/step - loss: 0.4985 - acc: 0.7604\n", "Epoch 22/150\n", "768/768 [==============================] - 0s 358us/step - loss: 0.5037 - acc: 0.7695\n", "Epoch 23/150\n", "768/768 [==============================] - 0s 359us/step - loss: 0.4868 - acc: 0.7656\n", "Epoch 24/150\n", "768/768 [==============================] - 0s 368us/step - loss: 0.5001 - acc: 0.7656\n", "Epoch 25/150\n", "768/768 [==============================] - 0s 374us/step - loss: 0.4929 - acc: 0.7461\n", "Epoch 26/150\n", "768/768 [==============================] - 0s 372us/step - loss: 0.4953 - acc: 0.7721\n", "Epoch 27/150\n", "768/768 [==============================] - 0s 354us/step - loss: 0.4897 - acc: 0.7721\n", "Epoch 28/150\n", "768/768 [==============================] - 0s 367us/step - loss: 0.4877 - acc: 0.7643\n", "Epoch 29/150\n", "768/768 [==============================] - 0s 349us/step - loss: 0.4896 - acc: 0.7578\n", "Epoch 30/150\n", "768/768 [==============================] - 0s 363us/step - loss: 0.4871 - acc: 0.7786\n", "Epoch 31/150\n", "768/768 
[==============================] - 0s 351us/step - loss: 0.4974 - acc: 0.7539\n", "Epoch 32/150\n", "768/768 [==============================] - 0s 342us/step - loss: 0.4804 - acc: 0.7747\n", "Epoch 33/150\n", "768/768 [==============================] - 0s 347us/step - loss: 0.4863 - acc: 0.7721\n", "Epoch 34/150\n", "768/768 [==============================] - 0s 335us/step - loss: 0.4779 - acc: 0.7721\n", "Epoch 35/150\n", "768/768 [==============================] - 0s 338us/step - loss: 0.4714 - acc: 0.7891\n", "Epoch 36/150\n", "768/768 [==============================] - 0s 345us/step - loss: 0.4895 - acc: 0.7604\n", "Epoch 37/150\n", "768/768 [==============================] - 0s 355us/step - loss: 0.4859 - acc: 0.7656\n", "Epoch 38/150\n", "768/768 [==============================] - 0s 343us/step - loss: 0.4836 - acc: 0.7682\n", "Epoch 39/150\n", "768/768 [==============================] - 0s 369us/step - loss: 0.5042 - acc: 0.7656\n", "Epoch 40/150\n", "768/768 [==============================] - 0s 371us/step - loss: 0.4709 - acc: 0.7917\n", "Epoch 41/150\n", "768/768 [==============================] - 0s 367us/step - loss: 0.4850 - acc: 0.7812\n", "Epoch 42/150\n", "768/768 [==============================] - 0s 374us/step - loss: 0.4851 - acc: 0.7604\n", "Epoch 43/150\n", "768/768 [==============================] - 0s 345us/step - loss: 0.4756 - acc: 0.7734\n", "Epoch 44/150\n", "768/768 [==============================] - 0s 353us/step - loss: 0.4765 - acc: 0.7721\n", "Epoch 45/150\n", "768/768 [==============================] - 0s 349us/step - loss: 0.4812 - acc: 0.7695\n", "Epoch 46/150\n", "768/768 [==============================] - 0s 349us/step - loss: 0.4844 - acc: 0.7630\n", "Epoch 47/150\n", "768/768 [==============================] - 0s 353us/step - loss: 0.4709 - acc: 0.7839\n", "Epoch 48/150\n", "768/768 [==============================] - 0s 348us/step - loss: 0.4860 - acc: 0.7721\n", "Epoch 49/150\n", "768/768 [==============================] - 0s 353us/step - loss: 0.4741 - acc: 0.7669\n", "Epoch 50/150\n", "768/768 [==============================] - 0s 373us/step - loss: 0.4833 - acc: 0.7747\n", "Epoch 51/150\n", "768/768 [==============================] - 0s 389us/step - loss: 0.4753 - acc: 0.7799\n", "Epoch 52/150\n", "768/768 [==============================] - 0s 356us/step - loss: 0.4740 - acc: 0.7760\n", "Epoch 53/150\n", "768/768 [==============================] - 0s 344us/step - loss: 0.4847 - acc: 0.7747\n", "Epoch 54/150\n", "768/768 [==============================] - 0s 335us/step - loss: 0.4684 - acc: 0.7747\n", "Epoch 55/150\n", "768/768 [==============================] - 0s 332us/step - loss: 0.4698 - acc: 0.7812\n", "Epoch 56/150\n", "768/768 [==============================] - 0s 341us/step - loss: 0.4764 - acc: 0.7695\n", "Epoch 57/150\n", "768/768 [==============================] - 0s 341us/step - loss: 0.4672 - acc: 0.7721\n", "Epoch 58/150\n", "768/768 [==============================] - 0s 377us/step - loss: 0.4598 - acc: 0.7747\n", "Epoch 59/150\n", "768/768 [==============================] - 0s 394us/step - loss: 0.4746 - acc: 0.7682\n", "Epoch 60/150\n", "768/768 [==============================] - 0s 344us/step - loss: 0.4762 - acc: 0.7839\n", "Epoch 61/150\n", "768/768 [==============================] - 0s 344us/step - loss: 0.4669 - acc: 0.7747\n", "Epoch 62/150\n", "768/768 [==============================] - 0s 346us/step - loss: 0.4597 - acc: 0.7852\n", "Epoch 63/150\n", "768/768 [==============================] - 0s 359us/step - loss: 0.4578 - 
acc: 0.7904\n", "Epoch 64/150\n", "768/768 [==============================] - 0s 363us/step - loss: 0.4617 - acc: 0.7826\n", "Epoch 65/150\n", "768/768 [==============================] - 0s 350us/step - loss: 0.4622 - acc: 0.7812\n", "Epoch 66/150\n", "768/768 [==============================] - 0s 342us/step - loss: 0.4709 - acc: 0.7839\n", "Epoch 67/150\n", "768/768 [==============================] - 0s 334us/step - loss: 0.4549 - acc: 0.7865\n", "Epoch 68/150\n", "768/768 [==============================] - 0s 345us/step - loss: 0.4578 - acc: 0.7891\n", "Epoch 69/150\n", "768/768 [==============================] - 0s 347us/step - loss: 0.4575 - acc: 0.7826\n", "Epoch 70/150\n", "768/768 [==============================] - 0s 352us/step - loss: 0.4612 - acc: 0.7826\n", "Epoch 71/150\n", "768/768 [==============================] - 0s 340us/step - loss: 0.4592 - acc: 0.7773\n", "Epoch 72/150\n", "768/768 [==============================] - 0s 357us/step - loss: 0.4614 - acc: 0.7826\n", "Epoch 73/150\n", "768/768 [==============================] - 0s 364us/step - loss: 0.4555 - acc: 0.7826\n", "Epoch 74/150\n", "768/768 [==============================] - 0s 369us/step - loss: 0.4520 - acc: 0.7904\n", "Epoch 75/150\n", "768/768 [==============================] - 0s 346us/step - loss: 0.4654 - acc: 0.7943\n", "Epoch 76/150\n", "768/768 [==============================] - 0s 376us/step - loss: 0.4558 - acc: 0.7786\n", "Epoch 77/150\n", "768/768 [==============================] - 0s 385us/step - loss: 0.4489 - acc: 0.7773\n", "Epoch 78/150\n", "768/768 [==============================] - 0s 359us/step - loss: 0.4574 - acc: 0.7917\n", "Epoch 79/150\n", "768/768 [==============================] - 0s 362us/step - loss: 0.4557 - acc: 0.7812\n", "Epoch 80/150\n", "768/768 [==============================] - 0s 342us/step - loss: 0.4463 - acc: 0.7956\n", "Epoch 81/150\n", "768/768 [==============================] - 0s 349us/step - loss: 0.4467 - acc: 0.7982\n", "Epoch 82/150\n", "768/768 [==============================] - 0s 366us/step - loss: 0.4595 - acc: 0.7865\n", "Epoch 83/150\n", "768/768 [==============================] - 0s 336us/step - loss: 0.4549 - acc: 0.7956\n", "Epoch 84/150\n", "768/768 [==============================] - 0s 350us/step - loss: 0.4509 - acc: 0.7812\n", "Epoch 85/150\n", "768/768 [==============================] - 0s 348us/step - loss: 0.4490 - acc: 0.7891\n", "Epoch 86/150\n", "768/768 [==============================] - 0s 347us/step - loss: 0.4573 - acc: 0.8047\n", "Epoch 87/150\n", "768/768 [==============================] - 0s 336us/step - loss: 0.4465 - acc: 0.7969\n", "Epoch 88/150\n", "768/768 [==============================] - 0s 329us/step - loss: 0.4503 - acc: 0.7630\n", "Epoch 89/150\n", "768/768 [==============================] - 0s 350us/step - loss: 0.4557 - acc: 0.7930\n", "Epoch 90/150\n", "768/768 [==============================] - 0s 356us/step - loss: 0.4510 - acc: 0.7891\n", "Epoch 91/150\n", "768/768 [==============================] - 0s 356us/step - loss: 0.4547 - acc: 0.7917\n", "Epoch 92/150\n", "768/768 [==============================] - 0s 373us/step - loss: 0.4506 - acc: 0.8060\n", "Epoch 93/150\n", "768/768 [==============================] - 0s 344us/step - loss: 0.4486 - acc: 0.7826\n", "Epoch 94/150\n", "768/768 [==============================] - 0s 339us/step - loss: 0.4425 - acc: 0.7956\n", "Epoch 95/150\n", "768/768 [==============================] - 0s 366us/step - loss: 0.4654 - acc: 0.7865\n", "Epoch 96/150\n", "768/768 
[==============================] - 0s 356us/step - loss: 0.4474 - acc: 0.7891\n", "Epoch 97/150\n", "768/768 [==============================] - 0s 358us/step - loss: 0.4413 - acc: 0.7943\n", "Epoch 98/150\n", "768/768 [==============================] - 0s 358us/step - loss: 0.4434 - acc: 0.8034\n", "Epoch 99/150\n", "768/768 [==============================] - 0s 358us/step - loss: 0.4457 - acc: 0.7982\n", "Epoch 100/150\n", "768/768 [==============================] - 0s 370us/step - loss: 0.4394 - acc: 0.7878\n", "Epoch 101/150\n", "768/768 [==============================] - 0s 357us/step - loss: 0.4480 - acc: 0.7891\n", "Epoch 102/150\n", "768/768 [==============================] - 0s 352us/step - loss: 0.4457 - acc: 0.7930\n", "Epoch 103/150\n", "768/768 [==============================] - 0s 350us/step - loss: 0.4298 - acc: 0.7956\n", "Epoch 104/150\n", "768/768 [==============================] - 0s 356us/step - loss: 0.4470 - acc: 0.7812\n", "Epoch 105/150\n", "768/768 [==============================] - 0s 358us/step - loss: 0.4504 - acc: 0.7826\n", "Epoch 106/150\n", "768/768 [==============================] - 0s 369us/step - loss: 0.4340 - acc: 0.7956\n", "Epoch 107/150\n", "768/768 [==============================] - 0s 347us/step - loss: 0.4339 - acc: 0.7878\n", "Epoch 108/150\n", "768/768 [==============================] - 0s 346us/step - loss: 0.4543 - acc: 0.7812\n", "Epoch 109/150\n", "768/768 [==============================] - 0s 347us/step - loss: 0.4363 - acc: 0.7930\n", "Epoch 110/150\n", "768/768 [==============================] - 0s 343us/step - loss: 0.4387 - acc: 0.7786\n", "Epoch 111/150\n", "768/768 [==============================] - 0s 354us/step - loss: 0.4409 - acc: 0.7891\n", "Epoch 112/150\n", "768/768 [==============================] - 0s 366us/step - loss: 0.4557 - acc: 0.7786\n", "Epoch 113/150\n", "768/768 [==============================] - 0s 352us/step - loss: 0.4395 - acc: 0.7995\n", "Epoch 114/150\n", "768/768 [==============================] - 0s 340us/step - loss: 0.4387 - acc: 0.8047\n", "Epoch 115/150\n", "768/768 [==============================] - 0s 355us/step - loss: 0.4431 - acc: 0.7839\n", "Epoch 116/150\n", "768/768 [==============================] - 0s 350us/step - loss: 0.4425 - acc: 0.7812\n", "Epoch 117/150\n", "768/768 [==============================] - 0s 373us/step - loss: 0.4329 - acc: 0.7904\n", "Epoch 118/150\n", "768/768 [==============================] - 0s 347us/step - loss: 0.4363 - acc: 0.7891\n", "Epoch 119/150\n", "768/768 [==============================] - 0s 336us/step - loss: 0.4532 - acc: 0.7734\n", "Epoch 120/150\n", "768/768 [==============================] - 0s 362us/step - loss: 0.4482 - acc: 0.7826\n", "Epoch 121/150\n", "768/768 [==============================] - 0s 362us/step - loss: 0.4305 - acc: 0.8021\n", "Epoch 122/150\n", "768/768 [==============================] - 0s 365us/step - loss: 0.4296 - acc: 0.7982\n", "Epoch 123/150\n", "768/768 [==============================] - 0s 359us/step - loss: 0.4330 - acc: 0.7904\n", "Epoch 124/150\n", "768/768 [==============================] - 0s 354us/step - loss: 0.4318 - acc: 0.7930\n", "Epoch 125/150\n", "768/768 [==============================] - 0s 353us/step - loss: 0.4385 - acc: 0.7891\n", "Epoch 126/150\n", "768/768 [==============================] - 0s 353us/step - loss: 0.4328 - acc: 0.7904\n", "Epoch 127/150\n", "768/768 [==============================] - 0s 356us/step - loss: 0.4430 - acc: 0.7904\n", "Epoch 128/150\n", "768/768 [==============================] - 0s 
381us/step - loss: 0.4525 - acc: 0.7943\n", "Epoch 129/150\n", "768/768 [==============================] - 0s 357us/step - loss: 0.4371 - acc: 0.7969\n", "Epoch 130/150\n", "768/768 [==============================] - 0s 366us/step - loss: 0.4237 - acc: 0.7982\n", "Epoch 131/150\n", "768/768 [==============================] - 0s 353us/step - loss: 0.4318 - acc: 0.7969\n", "Epoch 132/150\n", "768/768 [==============================] - 0s 345us/step - loss: 0.4267 - acc: 0.8047\n", "Epoch 133/150\n", "768/768 [==============================] - 0s 377us/step - loss: 0.4386 - acc: 0.7956\n", "Epoch 134/150\n", "768/768 [==============================] - 0s 353us/step - loss: 0.4150 - acc: 0.8242\n", "Epoch 135/150\n", "768/768 [==============================] - 0s 346us/step - loss: 0.4354 - acc: 0.7943\n", "Epoch 136/150\n", "768/768 [==============================] - 0s 361us/step - loss: 0.4342 - acc: 0.7943\n", "Epoch 137/150\n", "768/768 [==============================] - 0s 358us/step - loss: 0.4333 - acc: 0.7826\n", "Epoch 138/150\n", "768/768 [==============================] - 0s 349us/step - loss: 0.4443 - acc: 0.7878\n", "Epoch 139/150\n", "768/768 [==============================] - 0s 359us/step - loss: 0.4229 - acc: 0.7969\n", "Epoch 140/150\n", "768/768 [==============================] - 0s 358us/step - loss: 0.4382 - acc: 0.7930\n", "Epoch 141/150\n", "768/768 [==============================] - 0s 345us/step - loss: 0.4379 - acc: 0.7982\n", "Epoch 142/150\n", "768/768 [==============================] - 0s 363us/step - loss: 0.4345 - acc: 0.7839\n", "Epoch 143/150\n", "768/768 [==============================] - 0s 352us/step - loss: 0.4330 - acc: 0.7917\n", "Epoch 144/150\n", "768/768 [==============================] - 0s 372us/step - loss: 0.4351 - acc: 0.7799\n", "Epoch 145/150\n", "768/768 [==============================] - 0s 363us/step - loss: 0.4214 - acc: 0.8099\n", "Epoch 146/150\n", "768/768 [==============================] - 0s 364us/step - loss: 0.4374 - acc: 0.7865\n", "Epoch 147/150\n", "768/768 [==============================] - 0s 356us/step - loss: 0.4249 - acc: 0.8086\n", "Epoch 148/150\n", "768/768 [==============================] - 0s 355us/step - loss: 0.4271 - acc: 0.7995\n", "Epoch 149/150\n", "768/768 [==============================] - 0s 363us/step - loss: 0.4330 - acc: 0.7995\n", "Epoch 150/150\n", "768/768 [==============================] - 0s 357us/step - loss: 0.4300 - acc: 0.7904\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model.fit(X, Y, epochs=150, batch_size=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The evaluation is available via the *evaluate* method. In our case we print out the accuracy:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "768/768 [==============================] - 0s 49us/step\n", "\n", "acc: 78.52%\n" ] } ], "source": [ "scores = model.evaluate(X, Y)\n", "print(\"\\n%s: %.2f%%\" % (model.metrics_names[1], scores[1]*100))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now make predictions by calling the *predict* method with the input matrix as a parameter. In this case we are using the training data to predict the output classes. This is in general not a good idea. 
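A better practice is to hold out a test set that the model never sees during training and to evaluate on that. The following cell is a minimal sketch of such a split; it assumes that scikit-learn is installed and is an addition, not part of Brownlee's original tutorial. The *fit* and *evaluate* calls are left commented out so that the outputs above stay unchanged:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch (assumes scikit-learn is installed): hold out 20% of the data\n", "# as a test set; the split ratio is an arbitrary choice here.\n", "from sklearn.model_selection import train_test_split\n", "\n", "X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=7)\n", "# One would then train and evaluate on the separate portions:\n", "# model.fit(X_train, Y_train, epochs=150, batch_size=4)\n", "# model.evaluate(X_test, Y_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook we keep the original approach. 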
Here it just serves the purpose of showing how the methods are used:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "predictions = model.predict(X)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 
0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0]\n" ] } ], "source": [ "rounded = [round(x[0]) for x in predictions]\n", "print(rounded)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.1" }, "latex_envs": { "LaTeX_envs_menu_present": true, "autoclose": false, "autocomplete": true, "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 1, "hotkeys": { "equation": "Ctrl-E", "itemize": "Ctrl-I" }, "labels_anchors": false, "latex_user_defs": false, "report_style_numbering": false, "user_envs_cfg": false }, "latex_metadata": { "affiliation": "Indiana University, Bloomington, IN, USA", "author": "Damir Cavar", "title": "Neural Network Example with Keras" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": false, "sideBar": false, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": false, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }