{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# The Art of Training Neural Networks with Keras\n", "\n", "
\n", "\n", "## Tools in the Deep Learning Ecosystem\n", "\n", "\n", "\n", "
\n", "\n", "* Hardware GPU vs CPU: At the core hardware level, we have CPUs and GPUs executing instructions. \n", "\n", "
\n", "\n", " * A CPU is able to execute large, sequential instructions but can only execute a small number of instructions parallelly\n", " \n", "
\n", " \n", " * A GPU can execute hundreds of small instructions parallelly\n", " \n", "\n", "\n", "* For deep learning where we have to do a bunch of linear algebraic computations parallelly, GPUs are exponentially faster than CPUs\n", "\n", "
\n", "\n", "* Compute Frameworks such as BLAS and CUDA help routine computations to be optimised for the specific processor instruction set for accelerated compute\n", "\n", "
\n", " \n", " * Basic Linear Algebra Subprograms are a bunch of routines that specify the low level routines for many linear algebraic operations. R, numpy, Matlab use BLAS to accelerate linear algebra operations. Ex: Intel's implementation of BLAS is known as Intel Math Kernel Library (MKL)\n", " \n", "
\n", " \n", " * CUDA is a framework created by NVIDIA that helps programmers write software to perform general purpose computing tasks on GPUs, most of the deep learning libraries use CUDA\n", "\n", "
\n", "\n", "* Libraries with support for Autodifferentiation to help in gradient computation for stacked layers were developed such as Theano, Tensorflow, CNTK and PyTorch. \n", "\n", "* These libraries are operations level (dot product, etc.) and the code is low level and hard to write for quick prototyping of new networks\n", "\n", "
\n", "\n", "\n", "\n", "
\n", "\n", "* That is wehre Keras comes into picture, it is a high level library with the abstraction at the __layer__ level. It was built as an abstraction to Theano and later support was added to Tensorflow and CNTK\n", "\n", "* So you can write code in Keras that can run on any of these three deep learning library __backends__" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Building Blocks of Neural Networks\n", "\n", "### Layers, Data and Learning Representations\n", "\n", "
\n", "\n", "* __Layers__: logically grouped operations in a neural network, the parameters for the operations in the layer learn to generate the best features to predict the target\n", " \n", "\n", "\n", "\n", "\n", "
\n", "\n", "* The network needs to have __input data__ and corresponding __targets (y)__\n", "\n", "
\n", "\n", "* In traditional machine learning we see that changing the representation of the data (kernel trick, etc.) helps ease the process of learning from data\n", "\n", "
\n", "\n", "* __Activation function__ adds that non linearity and in combination with weights (parameters) of a layer, the network learns better representations of the data at each layer\n", "\n", "
\n", "\n", "### So basically, each layer takes input as data and spits out transformed data as output, simple as that. Now, let's dive into the details\n", "\n", "\n", "\n", "
\n", "\n", "* The goal of training neural networks is to find these perfect representation of data, which we get by \"learning\" the right weights\n", "\n", "\n", "\n", "
\n", " \n", "* The loss function, which defines the feedback signal used for learning helps guage __how different are the targets and the predicted targets__\n", "\n", "\n", "\n", "
\n", " \n", "* The optimizer, based on the feedback signal from the loss function changes the parameters / weights of the network to help make the predictions as close to the target as possible (minimizing the loss function)\n", "\n", "
\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Keras Interface\n", "\n", "
\n", "\n", "* There are two major ways to define and run neural netwroks using the Keras API\n", " \n", "
\n", " \n", " * Sequential API\n", " \n", "
\n", " \n", " * Functional API\n", "\n", "
\n", "\n", "### The Sequential API\n", "\n", "* The sequential api allows us to __quickly stack layers__ and build networks\n", "\n", "\n", "\n", "
\n", "\n", "\n", "\n", "
\n", "\n", "### The Functional API\n", "\n", "* The functional api allows us to build complex graph networks, we can kee chaining the the layers as functions and finally the `Model(inputs, outputs)` class connects all the various inputs and outputs\n", "\n", "
\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Layers in Keras" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* There are different categories of layers in Keras, the most commonly used ones are \"Dense Layers\", \"Convolution Layers\", \"Recurrent Layers\", etc." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: tensorflow in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (2.1.0)\n", "Requirement already satisfied: tensorboard<2.2.0,>=2.1.0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (2.1.0)\n", "Requirement already satisfied: absl-py>=0.7.0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (0.9.0)\n", "Requirement already satisfied: six>=1.12.0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (1.12.0)\n", "Requirement already satisfied: opt-einsum>=2.3.2 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (3.1.0)\n", "Requirement already satisfied: wheel>=0.26; python_version >= \"3\" in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (0.33.6)\n", "Requirement already satisfied: numpy<2.0,>=1.16.0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (1.17.2)\n", "Requirement already satisfied: wrapt>=1.11.1 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (1.11.2)\n", "Requirement already satisfied: astor>=0.6.0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (0.8.1)\n", "Requirement already satisfied: keras-applications>=1.0.8 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (1.0.8)\n", "Requirement already satisfied: gast==0.2.2 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (0.2.2)\n", "Requirement already satisfied: protobuf>=3.8.0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (3.11.2)\n", "Requirement already satisfied: tensorflow-estimator<2.2.0,>=2.1.0rc0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (2.1.0)\n", "Requirement already satisfied: keras-preprocessing>=1.1.0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (1.1.0)\n", "Requirement already satisfied: scipy==1.4.1; python_version >= \"3\" in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (1.4.1)\n", "Requirement already satisfied: google-pasta>=0.1.6 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (0.1.8)\n", "Requirement already satisfied: termcolor>=1.1.0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (1.1.0)\n", "Requirement already satisfied: grpcio>=1.8.6 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorflow) (1.26.0)\n", "Requirement already satisfied: requests<3,>=2.21.0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorboard<2.2.0,>=2.1.0->tensorflow) (2.22.0)\n", "Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorboard<2.2.0,>=2.1.0->tensorflow) (0.4.1)\n", "Requirement already satisfied: markdown>=2.6.8 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from 
tensorboard<2.2.0,>=2.1.0->tensorflow) (3.1.1)\n", "Requirement already satisfied: google-auth<2,>=1.6.3 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorboard<2.2.0,>=2.1.0->tensorflow) (1.11.0)\n", "Requirement already satisfied: werkzeug>=0.11.15 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorboard<2.2.0,>=2.1.0->tensorflow) (0.16.0)\n", "Requirement already satisfied: setuptools>=41.0.0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from tensorboard<2.2.0,>=2.1.0->tensorflow) (41.4.0)\n", "Requirement already satisfied: h5py in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from keras-applications>=1.0.8->tensorflow) (2.9.0)\n", "Requirement already satisfied: certifi>=2017.4.17 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard<2.2.0,>=2.1.0->tensorflow) (2019.9.11)\n", "Requirement already satisfied: idna<2.9,>=2.5 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard<2.2.0,>=2.1.0->tensorflow) (2.8)\n", "Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard<2.2.0,>=2.1.0->tensorflow) (1.24.2)\n", "Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard<2.2.0,>=2.1.0->tensorflow) (3.0.4)\n", "Requirement already satisfied: requests-oauthlib>=0.7.0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.2.0,>=2.1.0->tensorflow) (1.3.0)\n", "Requirement already satisfied: pyasn1-modules>=0.2.1 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from google-auth<2,>=1.6.3->tensorboard<2.2.0,>=2.1.0->tensorflow) (0.2.8)\n", "Requirement already satisfied: rsa<4.1,>=3.1.4 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from google-auth<2,>=1.6.3->tensorboard<2.2.0,>=2.1.0->tensorflow) (4.0)\n", "Requirement already satisfied: cachetools<5.0,>=2.0.0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from google-auth<2,>=1.6.3->tensorboard<2.2.0,>=2.1.0->tensorflow) (4.0.0)\n", "Requirement already satisfied: oauthlib>=3.0.0 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.2.0,>=2.1.0->tensorflow) (3.1.0)\n", "Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /Users/ankitjain/opt/anaconda3/lib/python3.7/site-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<2.2.0,>=2.1.0->tensorflow) (0.4.8)\n" ] } ], "source": [ "#!pip install tensorflow" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using TensorFlow backend.\n" ] } ], "source": [ "from keras.layers import Dense" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import keras" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['Activation',\n", " 'ActivityRegularization',\n", " 'Add',\n", " 'AlphaDropout',\n", " 'AtrousConvolution1D',\n", " 'AtrousConvolution2D',\n", " 'Average',\n", " 'AveragePooling1D',\n", " 'AveragePooling2D',\n", " 'AveragePooling3D',\n", " 'AvgPool1D',\n", " 'AvgPool2D',\n", " 'AvgPool3D',\n", " 'BatchNormalization',\n", " 
'Bidirectional',\n", " 'Concatenate',\n", " 'Conv1D',\n", " 'Conv2D',\n", " 'Conv2DTranspose',\n", " 'Conv3D',\n", " 'Conv3DTranspose',\n", " 'ConvLSTM2D',\n", " 'ConvLSTM2DCell',\n", " 'ConvRecurrent2D',\n", " 'Convolution1D',\n", " 'Convolution2D',\n", " 'Convolution3D',\n", " 'Cropping1D',\n", " 'Cropping2D',\n", " 'Cropping3D',\n", " 'CuDNNGRU',\n", " 'CuDNNLSTM',\n", " 'Deconvolution2D',\n", " 'Deconvolution3D',\n", " 'Dense',\n", " 'DepthwiseConv2D',\n", " 'Dot',\n", " 'Dropout',\n", " 'ELU',\n", " 'Embedding',\n", " 'Flatten',\n", " 'GRU',\n", " 'GRUCell',\n", " 'GaussianDropout',\n", " 'GaussianNoise',\n", " 'GlobalAveragePooling1D',\n", " 'GlobalAveragePooling2D',\n", " 'GlobalAveragePooling3D',\n", " 'GlobalAvgPool1D',\n", " 'GlobalAvgPool2D',\n", " 'GlobalAvgPool3D',\n", " 'GlobalMaxPool1D',\n", " 'GlobalMaxPool2D',\n", " 'GlobalMaxPool3D',\n", " 'GlobalMaxPooling1D',\n", " 'GlobalMaxPooling2D',\n", " 'GlobalMaxPooling3D',\n", " 'Highway',\n", " 'Input',\n", " 'InputLayer',\n", " 'InputSpec',\n", " 'LSTM',\n", " 'LSTMCell',\n", " 'Lambda',\n", " 'Layer',\n", " 'LeakyReLU',\n", " 'LocallyConnected1D',\n", " 'LocallyConnected2D',\n", " 'Masking',\n", " 'MaxPool1D',\n", " 'MaxPool2D',\n", " 'MaxPool3D',\n", " 'MaxPooling1D',\n", " 'MaxPooling2D',\n", " 'MaxPooling3D',\n", " 'Maximum',\n", " 'MaxoutDense',\n", " 'Minimum',\n", " 'Multiply',\n", " 'PReLU',\n", " 'Permute',\n", " 'RNN',\n", " 'ReLU',\n", " 'Recurrent',\n", " 'RepeatVector',\n", " 'Reshape',\n", " 'SeparableConv1D',\n", " 'SeparableConv2D',\n", " 'SimpleRNN',\n", " 'SimpleRNNCell',\n", " 'Softmax',\n", " 'SpatialDropout1D',\n", " 'SpatialDropout2D',\n", " 'SpatialDropout3D',\n", " 'StackedRNNCells',\n", " 'Subtract',\n", " 'ThresholdedReLU',\n", " 'TimeDistributed',\n", " 'UpSampling1D',\n", " 'UpSampling2D',\n", " 'UpSampling3D',\n", " 'ZeroPadding1D',\n", " 'ZeroPadding2D',\n", " 'ZeroPadding3D',\n", " '__builtins__',\n", " '__cached__',\n", " '__doc__',\n", " '__file__',\n", " '__loader__',\n", " '__name__',\n", " '__package__',\n", " '__path__',\n", " '__spec__',\n", " 'absolute_import',\n", " 'add',\n", " 'advanced_activations',\n", " 'average',\n", " 'concatenate',\n", " 'convolutional',\n", " 'convolutional_recurrent',\n", " 'core',\n", " 'cudnn_recurrent',\n", " 'deserialize',\n", " 'deserialize_keras_object',\n", " 'dot',\n", " 'embeddings',\n", " 'local',\n", " 'maximum',\n", " 'merge',\n", " 'minimum',\n", " 'multiply',\n", " 'noise',\n", " 'normalization',\n", " 'pooling',\n", " 'recurrent',\n", " 'serialize',\n", " 'subtract',\n", " 'wrappers']" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dir(keras.layers)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "layer1 = Dense(units = 32, activation = 'sigmoid')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Let's break down the above layer that we just built using the Dense class from keras\n", "\n", "
\n", "\n", "* The OUTPUT from the above layer can be given by `sigmoid ( dot ( W, INPUT ) + b )`, in the first round of training the W (weights) are randomly \"initialized\"\n", "\n", "
\n", "\n", "* Every input to the Dense layer (also known as fully connected) is connected to every unit in the hidden layer, as shown in the figure below\n", "\n", "
\n", "\n", "\n", "\n", "
\n", "\n", "* We will see many more categories and types of __Layers__ in keras, the __\"Dense\"__ layer is just one such class" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Stacking the layers together : The Keras Sequential API\n", "\n", "
\n", "\n", "* The keras sequential api enables us to build common yet complex neural network architectures flexibly\n", "\n", "
\n", "\n", "* Objects of the Keras sequential class, can have multiple neural network layers stacked on top of one another\n", "\n", "
\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "* You can create a Keras sequential model by passing in a list of layers to the Sequential object" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "from keras.models import Sequential\n", "from keras.layers import Dense\n", "\n", "model = Sequential([\n", " Dense(32, input_shape=(784,), activation = 'sigmoid'),\n", " Dense(10, activation = 'sigmoid'),\n", " Dense(9,activation='sigmoid'),\n", " Dense(1, activation = 'sigmoid')\n", "])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* But soon this becomes a problem to add more layers in a single list, so the \"add\" method on the sequential class object can add layers sequentially to the neural network" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "neural_network = Sequential()\n", "\n", "neural_network.add(Dense(32, input_dim=784, activation = 'sigmoid'))\n", "\n", "neural_network.add(Dense(10, activation = 'sigmoid'))\n", "\n", "neural_network.add(Dense(1, activation = 'sigmoid'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* The summary method on the neural network object gives us basic information about the structure of the network" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Model: \"sequential_2\"\n", "_________________________________________________________________\n", "Layer (type) Output Shape Param # \n", "=================================================================\n", "dense_5 (Dense) (None, 32) 25120 \n", "_________________________________________________________________\n", "dense_6 (Dense) (None, 10) 330 \n", "_________________________________________________________________\n", "dense_7 (Dense) (None, 1) 11 \n", "=================================================================\n", "Total params: 25,461\n", "Trainable params: 25,461\n", "Non-trainable params: 0\n", "_________________________________________________________________\n" ] } ], "source": [ "neural_network.summary()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* That's all! Keras is that simple, initialize an object of the Sequential class, use the add method on that object to add layers to the network sequentially, but wait what about the loss function? what about the optimizer?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Defining, Compiling and Running a Neural Network in Keras\n", "\n", "
\n", "\n", "### The process of learning in a Neural Network\n", "\n", "
\n", "\n", "### Loss Score\n", "\n", "
\n", "\n", "* The loss score is feedback signal that says how far is the output of your network compared to the ground truth\n", "\n", "
\n", "\n", "\n", "\n", "
\n", "\n", "* Two such loss scores that we use quite frequently are :\n", " \n", " 1) Binary Cross-Entropy: For Two-Class Classification problems\n", " \n", " 2) Mean Squared Error: For Regression problems\n", " \n", "
\n", "\n", "* We have already come across mean squared error before, so let's dive deeper into Binary Cross Entropy\n", "\n", "
\n", "\n", "$$\\begin{eqnarray} \n", " C = -\\frac{1}{n} \\sum_x \\left[y \\ln p + (1-y ) \\ln (1-p) \\right]\n", "\\end{eqnarray}$$\n", "\n", "
\n", "\n", "* In the above equation, p is the output of the network, n is the total number of samples in the training data, the sum is over all training inputs, x, and y is the corresponding desired output\n", "\n", "
\n", "\n", "* Below, we see the value of the cross entropy (sometimes referred to as the log loss) changing with the predicted probability, we can see that the value of the loss for a prediction above 0.5 significantly drops and this helps the network converge much faster than using a traditional mean squared error\n", "\n", "
\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Optimizers\n", "\n", "
\n", "\n", "* An optimizer is an algorithm that uses the feedback signal from the loss function, to actually update the weights so that the output from the network gets closer to the ground truth. The first optimizer that we use is Stochastic Gradient Descent (SGD), we will slowly come across many more optimizers\n", "\n", "
\n", "\n", "* We can import Classes from the optimizers module of keras and customize the specific optimizer to our liking\n", "\n", "
\n", "\n", " " ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "from keras.optimizers import SGD\n", "\n", "customized_optimizer = SGD(lr = 0.0001)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* We already know how learning rate can effect convergence, the graph below provides a decent intuition, hence having the flexibility to change the learning rate is very important" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![](img/learning_rate.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Compiling the neural network ( loss function + optimizer)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "neural_network.compile(loss = 'binary_crossentropy', optimizer = 'sgd', metrics = ['accuracy'])" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "neural_network.compile(loss = 'binary_crossentropy', optimizer = customized_optimizer, metrics = ['accuracy'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* As we can see from the compile step above we need to specify the loss function, optimization algorithm and we can also mention the metrics that we want to monitor while training the neural network\n", "\n", "
\n", "\n", "* Also, please __note__ that the optimizer argument can either be a string or a customizable object from the optimizers module in Keras" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## So are we done? what about learning the weights? \n", "\n", "
\n", "\n", "* Training the network in Keras is also very simple, we call the `.fit()` method and pass in the arguments\n", "\n", "
\n", "\n", "* Some important terms for training neural networks are epochs, batch_size" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "# NOTE : Don't run the following line of code as we do not yet have X_train and y_train\n", "\n", "neural_network.fit(X_train, y_train, epochs=100, batch_size=64)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* An __Epoch__ is when an ENTIRE dataset is passed forward and backward through the neural network only once.\n", "\n", "
\n", "\n", "* Since most of the times an epoch is too large to fit in memory, we divide the data into batches and compute the gradient on batches for each forward and backward pass\n", "\n", "
\n", "\n", "* __Batch size__ is the number of samples that are going to be propagated through the network.\n", "\n", "
\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Neural Network Architectures for Basic ML Tasks\n", "\n", "
\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 2 }