{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# NEURAL NETWORKS\n", "\n", "This notebook covers the neural network algorithms from chapter 18 of the book *Artificial Intelligence: A Modern Approach*, by Stuart Russell and Peter Norvig. The code in the notebook can be found in [learning.py](https://github.com/aimacode/aima-python/blob/master/learning.py).\n", "\n", "Execute the cell below to get started:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from learning import *\n", "\n", "from notebook import psource, pseudocode" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## NEURAL NETWORK ALGORITHM\n", "\n", "### Overview\n", "\n", "Although the Perceptron may seem like a good way to make classifications, it is a linear classifier (which, roughly, means it can only draw straight lines to divide spaces) and is therefore stumped by problems that are not linearly separable. To solve this issue, we can extend the Perceptron by stacking multiple layers of its functionality. The construct we are left with is called a Neural Network, or a Multi-Layer Perceptron, and it is a non-linear classifier. It achieves this by passing each layer's linear combinations through non-linear activation functions.\n", "\n", "Similar to the Perceptron, this network has an input and an output layer; however, it can also have a number of hidden layers, which are responsible for the non-linearity of the network. Each layer is composed of nodes. Each node in a layer (excluding the input one) holds some values, called *weights*, and takes as input the output values of the previous layer. The node calculates the dot product of its inputs and its weights and then activates it with an *activation function* (e.g. the sigmoid function). Its output is then fed to the nodes of the next layer. 
Note that sometimes the output layer does not use an activation function, or uses a different one from the rest of the network. The process of passing the outputs through the layers is called *feed-forward*.\n", "\n", "After the input values are fed forward through the network, the resulting output can be used for classification. The problem at hand now is how to train the network (i.e. adjust the weights in the nodes). To accomplish that we utilize the *Backpropagation* algorithm. In short, it does the opposite of what we were doing up to this point: instead of feeding the input forward, it tracks the error backwards. After we make a classification, we check whether it is correct and how far off we were. We then propagate this error backwards through the network, adjusting the weights of the nodes accordingly. We run the algorithm for a fixed number of passes over the dataset, or until we are satisfied with the results. Each full pass over the dataset is called an *epoch*. In a later section we take a detailed look at how this algorithm works.\n", "\n", "NOTE: Sometimes we add another node to the input of each layer, called *bias*. This is a constant value that is fed to the next layer, usually set to 1. The bias generally helps us \"shift\" the computed function to the left or right." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![neural_net](images/neural_net.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Implementation\n", "\n", "The `NeuralNetLearner` function takes as input a dataset to train upon, the learning rate (in (0, 1]), the number of epochs and, finally, the sizes of the hidden layers. This last argument is a list, with each element corresponding to one hidden layer.\n", "\n", "After that we create our neural network in the `network` function, which makes the necessary connections between the input layer, hidden layers and output layer. 
With the network ready, we use `BackPropagationLearner` to train the weights of our network on the examples provided in the dataset.\n", "\n", "`NeuralNetLearner` returns the `predict` function which, in short, receives an example and feeds it forward through our network to generate a prediction.\n", "\n", "In more detail, the example values are first passed to the input layer and then through the rest of the layers. Each node calculates the dot product of its inputs and its weights, activates it and pushes it to the next layer. The final prediction is the output-layer node with the maximum value." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "
\n", "def NeuralNetLearner(dataset, hidden_layer_sizes=None,\n",
" learning_rate=0.01, epochs=100, activation=sigmoid):\n",
" """Layered feed-forward network.\n",
" hidden_layer_sizes: List of number of hidden units per hidden layer\n",
" learning_rate: Learning rate of gradient descent\n",
" epochs: Number of passes over the dataset\n",
" """\n",
"\n",
" hidden_layer_sizes = hidden_layer_sizes or [3] # default value\n",
" i_units = len(dataset.inputs)\n",
" o_units = len(dataset.values[dataset.target])\n",
"\n",
" # construct a network\n",
" raw_net = network(i_units, hidden_layer_sizes, o_units, activation)\n",
" learned_net = BackPropagationLearner(dataset, raw_net,\n",
" learning_rate, epochs, activation)\n",
"\n",
" def predict(example):\n",
" # Input nodes\n",
" i_nodes = learned_net[0]\n",
"\n",
" # Activate input layer\n",
" for v, n in zip(example, i_nodes):\n",
" n.value = v\n",
"\n",
" # Forward pass\n",
" for layer in learned_net[1:]:\n",
" for node in layer:\n",
" inc = [n.value for n in node.inputs]\n",
" in_val = dotproduct(inc, node.weights)\n",
" node.value = node.activation(in_val)\n",
"\n",
" # Hypothesis\n",
" o_nodes = learned_net[-1]\n",
" prediction = find_max_node(o_nodes)\n",
" return prediction\n",
"\n",
" return predict\n",
"
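The feed-forward step in `predict` above — dot product of inputs and weights, then an activation, layer by layer, finishing with an arg-max over the output nodes — can be sketched in a few lines of plain Python. The helper names, toy weights and layer sizes below are illustrative assumptions, not the `learning.py` API:

```python
import math

def sigmoid(x):
    # Logistic activation: squashes any real value into (0, 1)
    return 1 / (1 + math.exp(-x))

def dotproduct(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))

def feed_forward(layers, example):
    """Forward pass through a network given as a list of weight matrices:
    layers[i][j] holds the incoming weights of node j in layer i + 1."""
    values = example
    for layer in layers:
        values = [sigmoid(dotproduct(values, weights)) for weights in layer]
    return values

# A toy 2-3-2 network with hand-picked weights (hypothetical values)
hidden = [[0.5, -0.5], [0.25, 0.75], [-1.0, 1.0]]
output = [[1.0, 1.0, 1.0], [-1.0, -1.0, -1.0]]

out = feed_forward([hidden, output], [1.0, 0.0])
prediction = out.index(max(out))  # index of the output node with the maximum value
```

With input `[1.0, 0.0]` the first output node ends up with the larger value, so the prediction is class 0; a trained network would of course use learned rather than hand-picked weights.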
def BackPropagationLearner(dataset, net, learning_rate, epochs, activation=sigmoid):\n",
" """[Figure 18.23] The back-propagation algorithm for multilayer networks"""\n",
" # Initialise weights\n",
" for layer in net:\n",
" for node in layer:\n",
" node.weights = random_weights(min_value=-0.5, max_value=0.5,\n",
" num_weights=len(node.weights))\n",
"\n",
" examples = dataset.examples\n",
" # NOTE: as of now dataset.target gives an int instead of a list;\n",
" # changing the DataSet class would affect all the learners,\n",
" # so this will be taken care of later.\n",
" o_nodes = net[-1]\n",
" i_nodes = net[0]\n",
" o_units = len(o_nodes)\n",
" idx_t = dataset.target\n",
" idx_i = dataset.inputs\n",
" n_layers = len(net)\n",
"\n",
" inputs, targets = init_examples(examples, idx_i, idx_t, o_units)\n",
"\n",
" for epoch in range(epochs):\n",
" # Iterate over each example\n",
" for e in range(len(examples)):\n",
" i_val = inputs[e]\n",
" t_val = targets[e]\n",
"\n",
" # Activate input layer\n",
" for v, n in zip(i_val, i_nodes):\n",
" n.value = v\n",
"\n",
" # Forward pass\n",
" for layer in net[1:]:\n",
" for node in layer:\n",
" inc = [n.value for n in node.inputs]\n",
" in_val = dotproduct(inc, node.weights)\n",
" node.value = node.activation(in_val)\n",
"\n",
" # Initialize delta\n",
" delta = [[] for _ in range(n_layers)]\n",
"\n",
" # Compute outer layer delta\n",
"\n",
" # Error for the MSE cost function\n",
" err = [t_val[i] - o_nodes[i].value for i in range(o_units)]\n",
"\n",
" # The activation function used is either sigmoid or relu\n",
" if activation == sigmoid:\n",
" delta[-1] = [sigmoid_derivative(o_nodes[i].value) * err[i] for i in range(o_units)]\n",
" else:\n",
" delta[-1] = [relu_derivative(o_nodes[i].value) * err[i] for i in range(o_units)]\n",
"\n",
" # Backward pass\n",
" h_layers = n_layers - 2\n",
" for i in range(h_layers, 0, -1):\n",
" layer = net[i]\n",
" h_units = len(layer)\n",
" nx_layer = net[i+1]\n",
"\n",
" # weights from each ith layer node to each i + 1th layer node\n",
" w = [[node.weights[k] for node in nx_layer] for k in range(h_units)]\n",
"\n",
" if activation == sigmoid:\n",
" delta[i] = [sigmoid_derivative(layer[j].value) * dotproduct(w[j], delta[i+1])\n",
" for j in range(h_units)]\n",
" else:\n",
" delta[i] = [relu_derivative(layer[j].value) * dotproduct(w[j], delta[i+1])\n",
" for j in range(h_units)]\n",
"\n",
" # Update weights\n",
" for i in range(1, n_layers):\n",
" layer = net[i]\n",
" inc = [node.value for node in net[i-1]]\n",
" units = len(layer)\n",
" for j in range(units):\n",
" layer[j].weights = vector_add(layer[j].weights,\n",
" scalar_vector_product(\n",
" learning_rate * delta[i][j], inc))\n",
"\n",
" return net\n",
"
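The per-weight update in `BackPropagationLearner` — take the error, scale it by the activation's derivative to get `delta`, then nudge each weight by `learning_rate * delta * input` — can be demonstrated end-to-end on a single sigmoid unit, which is the smallest network the rule applies to. The OR toy dataset and helper names below are illustrative, not part of `learning.py`:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def sigmoid_derivative(value):
    # Derivative of the sigmoid, expressed in terms of its output value
    return value * (1 - value)

# OR function; the leading 1.0 in each example is the bias input
examples = [([1.0, 0.0, 0.0], 0.0), ([1.0, 0.0, 1.0], 1.0),
            ([1.0, 1.0, 0.0], 1.0), ([1.0, 1.0, 1.0], 1.0)]

weights = [0.0, 0.0, 0.0]
learning_rate = 0.5

def mse():
    # Mean squared error of the unit over all examples
    return sum((t - sigmoid(sum(w * x for w, x in zip(weights, xs)))) ** 2
               for xs, t in examples) / len(examples)

before = mse()
for epoch in range(100):  # epochs: full passes over the dataset
    for xs, t in examples:
        out = sigmoid(sum(w * x for w, x in zip(weights, xs)))
        delta = sigmoid_derivative(out) * (t - out)  # error scaled by the derivative
        weights = [w + learning_rate * delta * x for w, x in zip(weights, xs)]
after = mse()
```

After training, `after` is lower than `before`: repeatedly propagating the scaled error into the weights is exactly what drives the loss down in the full multi-layer algorithm as well.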