{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"***\n",
"***\n",
"# Intro to Neural Networks\n",
"\n",
"Handwritten Digit Recognition\n",
"\n",
"***\n",
"***\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-15T11:13:11.585052Z",
"start_time": "2019-01-15T11:13:11.577254Z"
},
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"\n",
"**The Neuron: A Biological Information Processor**\n",
"\n",
"- dendrites - the receivers\n",
"- soma - the neuron cell body (sums the input signals)\n",
"- axon - the transmitter\n",
"- synapse - the point of transmission\n",
"- the neuron activates once a certain threshold is met\n",
"\n",
"Learning occurs via electro-chemical changes in the effectiveness of the synaptic junctions. \n"
]
},
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-15T11:22:04.940334Z",
"start_time": "2019-01-15T11:22:04.932572Z"
},
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"\n",
"**An Artificial Neuron: The Perceptron, simulated in hardware or software**\n",
"- input connections - the receivers\n",
"- a node, unit, or PE (processing element) simulates the neuron body\n",
"- output connection - the transmitter\n",
"- activation function employs a threshold or bias\n",
"- connection weights act as synaptic junctions\n",
"\n",
"Learning occurs via changes in value of the connection weights. \n"
]
},
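{
"cell_type": "markdown",
"metadata": {},
"source": [
"The weighted-sum-plus-threshold behaviour above can be sketched in a few lines of NumPy (the weights, bias, and AND-gate example are made up for illustration, not taken from a trained model):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def perceptron(x, w, b):\n",
"    # weighted sum of the inputs plus bias, passed through a hard threshold\n",
"    return 1 if np.dot(w, x) + b > 0 else 0\n",
"\n",
"# hypothetical weights that make the perceptron compute a logical AND\n",
"w, b = np.array([0.5, 0.5]), -0.7\n",
"[perceptron(np.array(x), w, b) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]]  # [0, 0, 0, 1]"
]
},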
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"\n",
"\n",
"Neural Networks consist of the following components:\n",
"\n",
"- An **input layer**, **x**\n",
"- An arbitrary number of **hidden layers**\n",
"- An **output layer**, **ŷ**\n",
"- A set of **weights** and **biases** between each layer, **W and b**\n",
"- A choice of **activation function** for each hidden layer, **σ**. \n",
" - e.g., Sigmoid activation function.\n",
"\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"ANNs incorporate the two fundamental components of biological neural nets:\n",
"\n",
"\n",
"- Neurons (nodes)\n",
"- Synapses (weights)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
""
]
},
{
"cell_type": "code",
"execution_count": 100,
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-15T14:02:35.343059Z",
"start_time": "2019-01-15T14:02:35.338589Z"
},
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [
{
"data": {
"text/plain": [
"-0.5"
]
},
"execution_count": 100,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"1*0.25 + 0.5*(-1.5)"
]
},
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-15T11:05:51.546122Z",
"start_time": "2019-01-15T11:05:51.540662Z"
},
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Each iteration of the training process consists of the following steps:\n",
"\n",
"1. Calculating the predicted output **ŷ**, known as `feedforward`\n",
"2. Updating the weights and biases, known as `backpropagation`"
]
},
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-15T06:09:32.966910Z",
"start_time": "2019-01-15T06:09:32.963111Z"
},
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"\n",
"\n",
"**activation function** for each hidden layer, **σ**. "
]
},
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-15T06:09:32.966910Z",
"start_time": "2019-01-15T06:09:32.963111Z"
},
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The output **ŷ** of a simple 2-layer Neural Network is:\n",
"\n",
"$$ \\widehat{y} = \\sigma (w_2 z + b_2) = \\sigma(w_2 \\sigma(w_1 x + b_1) + b_2)$$\n"
]
},
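{
"cell_type": "markdown",
"metadata": {},
"source": [
"A direct NumPy translation of this formula; the weights and biases (`W1`, `b1`, `W2`, `b2`) are made-up values, not trained parameters:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def sigmoid(x):\n",
"    return 1 / (1 + np.exp(-x))\n",
"\n",
"x = np.array([0.5, 0.1])                                           # input\n",
"W1, b1 = np.array([[0.2, 0.4], [0.6, 0.1]]), np.array([0.1, 0.2])  # layer 1\n",
"W2, b2 = np.array([0.3, 0.9]), 0.05                                # layer 2\n",
"\n",
"z = sigmoid(W1 @ x + b1)       # hidden layer\n",
"y_hat = sigmoid(W2 @ z + b2)   # output yhat\n",
"y_hat"
]
},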
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"**Loss (or Cost) function**\n",
"\n",
"\\begin{eqnarray} C(w,b) \\equiv\n",
" \\frac{1}{2n} \\sum_x \\| y(x) - a\\|^2.\n",
"\\tag{6}\\end{eqnarray}\n",
"\n",
"Here, w denotes the collection of all weights in the network, b all the biases, n is the total number of training inputs, a is the vector of outputs from the network when x is input, and the sum is over all training inputs, x. "
]
},
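{
"cell_type": "markdown",
"metadata": {},
"source": [
"This cost can be computed directly; the targets `y` and outputs `a` below are toy values, assumed only to make the formula concrete:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"y = np.array([[1, 0], [0, 1], [1, 0]])              # targets, one row per training input x\n",
"a = np.array([[0.9, 0.2], [0.1, 0.8], [0.7, 0.3]])  # network outputs\n",
"\n",
"n = len(y)                                          # number of training inputs\n",
"C = np.sum(np.linalg.norm(y - a, axis=1) ** 2) / (2 * n)\n",
"C"
]
},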
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The chain rule is used to calculate the derivative of the **loss function** with respect to the weights. \n",
"\n",
"\n",
"\n",
"Note that for simplicity, we have only displayed the partial derivative assuming a 1-layer Neural Network."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## **Gradient Descent**\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-16T13:16:06.927451Z",
"start_time": "2019-01-16T13:16:06.921993Z"
},
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"- Gradient descent computes the loss over the entire dataset for every update, so it is slow;\n",
"- stochastic gradient descent is faster, but its convergence behaviour is worse;\n",
"- mini-batch gradient descent splits the data into batches and updates the parameters once per batch: the samples in a batch jointly determine the gradient direction, which reduces randomness.\n"
]
},
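{
"cell_type": "markdown",
"metadata": {},
"source": [
"The three variants differ only in how many samples feed each update. A one-epoch mini-batch loop might look like this (a sketch; `grad` is a placeholder for a function returning the gradient of the loss on a batch):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def minibatch_epoch(X, y, w, grad, lr=0.1, batch_size=100):\n",
"    # one epoch of mini-batch gradient descent\n",
"    idx = np.random.permutation(len(X))\n",
"    for start in range(0, len(X), batch_size):\n",
"        batch = idx[start:start + batch_size]\n",
"        # the samples in the batch jointly determine this step's gradient direction\n",
"        w = w - lr * grad(X[batch], y[batch], w)\n",
"    return w"
]
},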
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"**Understanding the Mathematics behind Gradient Descent**\n",
"\n",
"A simple mathematical intuition behind one of the commonly used optimisation algorithms in Machine Learning.\n",
"\n",
"https://www.douban.com/note/713353797/"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"## [Linear neural networks: A simple case](https://www.python-course.eu/neural_networks_backpropagation.php)\n",
"\n",
"\n",
"The output signal is created by\n",
"- summing up all the weighted inputs;\n",
"- no activation function is applied.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-16T13:24:47.324282Z",
"start_time": "2019-01-16T13:24:47.320063Z"
},
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"\n",
"The error is the difference between the target and the actual output:\n",
"$e_i = t_i - o_i$\n",
"\n",
"- $e_1 = t_1 - o_1 = 1 - 0.92 = 0.08$\n",
"\n",
"Depending on this error, we have to change the weights accordingly. \n",
"- We can calculate the fraction of the error $e_1$ attributable to $w_{11}$ as:\n",
"- $e_1 \\cdot \\frac{w_{11}}{\\sum_{i=1}^{4} w_{i1}} = 0.08 \\cdot \\frac{0.6}{0.6 + 0.4 + 0.1 + 0.2} = 0.037$"
]
},
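{
"cell_type": "markdown",
"metadata": {},
"source": [
"Reproducing the 0.037 above (the weights and output are taken from the worked example):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"w_col = [0.6, 0.4, 0.1, 0.2]          # weights feeding into output node 1\n",
"e1 = 1 - 0.92                         # error of output node 1\n",
"round(e1 * w_col[0] / sum(w_col), 3)  # fraction of e1 attributed to w11: 0.037"
]
},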
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"## Weight update\n",
"http://home.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html\n",
"\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"We know that $E = \\sum_{j=1}^{n} \\frac{1}{2} (t_j - o_j)^2$\n",
"\n",
"And given that $t_j$ is a constant, we have:\n",
"\n",
"$\\frac{\\partial E}{\\partial o_{j}} = -(t_j - o_j)$\n",
"\n",
"Apply the chain rule for the differentiation:\n",
"\n",
"$\\frac{\\partial E}{\\partial w_{ij}} = \\frac{\\partial E}{\\partial o_{j}} \\cdot \\frac{\\partial o_j}{\\partial w_{ij}}$\n",
"\n",
"$\\frac{\\partial E}{\\partial w_{ij}} = -(t_j - o_j) \\cdot \\frac{\\partial o_j}{\\partial w_{ij}} $\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Further, we often use the `sigmoid function` as the activation function: $\\sigma(x) = \\frac{1}{1+e^{-x}}$\n",
"\n",
"Given $o_j = \\sigma(\\sum_{i=1}^{m} w_{ij}h_i)$, we have\n",
"\n",
"$\\frac{\\partial E}{\\partial w_{ij}} = -(t_j - o_j) \\cdot \\frac{\\partial }{\\partial w_{ij}} \\sigma(\\sum_{i=1}^{m} w_{ij}h_i)$\n",
"\n",
"And the sigmoid is easy to differentiate: $\\frac{\\partial \\sigma(x)}{\\partial x} = \\sigma(x) \\cdot (1 - \\sigma(x))$, so\n",
"\n",
"$\\frac{\\partial E}{\\partial w_{ij}} = -(t_j - o_j) \\cdot \\sigma(\\sum_{i=1}^{m} w_{ij}h_i) \\cdot (1 - \\sigma(\\sum_{i=1}^{m} w_{ij}h_i)) \\cdot \\frac{\\partial }{\\partial w_{ij}} \\sum_{i=1}^{m} w_{ij}h_i$\n",
"\n",
"$\\frac{\\partial E}{\\partial w_{ij}} = -(t_j - o_j) \\cdot \\sigma(\\sum_{i=1}^{m} w_{ij}h_i) \\cdot (1 - \\sigma(\\sum_{i=1}^{m} w_{ij}h_i)) \\cdot h_i$"
]
},
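{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick numerical sanity check of this derivative, with made-up `h`, `w`, and target `t`; note the leading minus sign that comes from differentiating $(t_j - o_j)^2$ with respect to $o_j$:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def sigmoid(x):\n",
"    return 1 / (1 + np.exp(-x))\n",
"\n",
"h = np.array([0.3, 0.7, 0.2])   # hidden activations (assumed)\n",
"w = np.array([0.5, -0.4, 0.9])  # weights into output node j (assumed)\n",
"t = 1.0                         # target\n",
"\n",
"o = sigmoid(w @ h)\n",
"analytic = -(t - o) * o * (1 - o) * h   # gradient from the formula\n",
"\n",
"# central-difference check of dE/dw\n",
"E = lambda ww: 0.5 * (t - sigmoid(ww @ h)) ** 2\n",
"eps = 1e-6\n",
"numeric = np.array([(E(w + eps * np.eye(3)[i]) - E(w - eps * np.eye(3)[i])) / (2 * eps)\n",
"                    for i in range(3)])\n",
"np.allclose(analytic, numeric)"
]
},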
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
""
]
},
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-15T10:57:02.153335Z",
"start_time": "2019-01-15T10:57:02.148752Z"
},
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Handwritten Digit Recognition\n",
"https://github.com/lingfeiwu/people2vec\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-15T06:09:32.966910Z",
"start_time": "2019-01-15T06:09:32.963111Z"
},
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"\n",
"\n",
"**Each image has 8*8 = 64 pixels**\n",
"- input = 64\n",
" - [0, 0, 1, 0, ..., 0]\n",
"- batch size = 100\n",
" - split the data into batches of 100 images each. \n",
"- hidden neurons = 50\n",
"- output = 10\n",
"- uses the ReLU activation function\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"\n",
"\n",
"- Set batch_size = 100 images\n",
"- Given each image 64 pixels\n",
" - input_matrix = 100*64\n",
"- Set #neurons = 50\n",
" - w1 = 64*50\n",
" - hidden_matrix = 100*50\n",
"- Given #output = 10\n",
" - w2 = 50*10\n",
" - output = 100*10"
]
},
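{
"cell_type": "markdown",
"metadata": {},
"source": [
"These shapes can be verified with random matrices (the values are meaningless; only the dimensions matter):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"X = np.random.rand(100, 64)   # one batch: 100 images x 64 pixels\n",
"w1 = np.random.rand(64, 50)\n",
"w2 = np.random.rand(50, 10)\n",
"\n",
"hidden = X @ w1               # shape (100, 50)\n",
"output = hidden @ w2          # shape (100, 10)\n",
"output.shape"
]
},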
{
"cell_type": "code",
"execution_count": 139,
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-16T12:48:30.781155Z",
"start_time": "2019-01-16T12:48:30.755864Z"
},
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"# Author: Robert Guthrie\n",
"import torch\n",
"import torch.autograd as autograd\n",
"import torch.nn as nn\n",
"import torch.nn.functional as F\n",
"import torch.optim as optim\n",
"from torch.autograd import Variable\n",
"\n",
"import sys\n",
"import matplotlib.cm as cm\n",
"import networkx as nx\n",
"import numpy as np\n",
"import pylab as plt\n",
"import matplotlib as mpl\n",
"from collections import defaultdict\n",
"from matplotlib.collections import LineCollection\n",
"%matplotlib inline\n",
"\n",
"from sklearn import datasets\n",
"from sklearn.manifold import Isomap\n",
"from sklearn.manifold import TSNE\n",
"from sklearn.manifold import MDS\n",
"from sklearn.decomposition import PCA"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## 1. Loading data "
]
},
{
"cell_type": "code",
"execution_count": 102,
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-15T14:02:35.687553Z",
"start_time": "2019-01-15T14:02:35.673578Z"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"#basic functions\n",
"# softmax\n",
"def softmax(x):\n",
" e_x = np.exp(x - np.max(x)) # to avoid inf\n",
" return e_x / e_x.sum(axis=0)\n",
"\n",
"def softmaxByRow(x):\n",
" e_x = np.exp(x - x.max(axis=1, keepdims=True))\n",
" return e_x / e_x.sum(axis=1, keepdims=True)\n",
"\n",
"# flush print\n",
"def flushPrint(d):\n",
" sys.stdout.write('\\r')\n",
" sys.stdout.write(str(d))\n",
" sys.stdout.flush()"
]
},
{
"cell_type": "code",
"execution_count": 103,
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-15T14:02:35.694040Z",
"start_time": "2019-01-15T14:02:35.689601Z"
},
"slideshow": {
"slide_type": "notes"
}
},
"outputs": [
{
"data": {
"text/plain": [
"inf"
]
},
"execution_count": 103,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# the limits of np.exp\n",
"np.exp(1000)"
]
},
{
"cell_type": "code",
"execution_count": 104,
"metadata": {
"ExecuteTime": {
"end_time": "2019-01-15T14:02:36.793003Z",
"start_time": "2019-01-15T14:02:35.696003Z"
},
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAASsAAAElCAYAAAC8rsTEAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4xLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvDW2N/gAAFOJJREFUeJzt3TtsFVfXxvF9Pn8iIuISByOkXGRDkJCQLHNxD0jQRTINaQkNlMYVdIYOFxF2E8kVuIWCS0sU270dQFRICdhKQriYOE6kICFZ56ve5pvngdl4xmfW6/+vXNqaM+vMnMUwy3vvVrvdTgDQdP/T6RMAgDIoVgBCoFgBCIFiBSAEihWAEChWAEKgWAEIgWIFIASKFYAQ/jdncE9PT7uvr6/0+OXlZRn/7bffCrFt27bJsV988YWMd3V1lT6PlFKan59farfbO983LjdH5/Hjx4XY6uqqHPvZZ5/J+CeffJL1mWVzTKm6PP/5559C7JdffpFjN2/eLOP79u3L+sw683z+/LmM//7774XYpk2b5Nj9+/fLeF33bErVXU91jz59+lSO3bt375o/b2FhIS0tLbXKjM0qVn19fWlubq70+Js3b8r4hQsXCrETJ07IsVeuXJHx7u7u0ueRUkqtVmuxzLjcHJ2jR48WYn/99Zcce/nyZRkfGhrK+syyOaZUXZ4zMzOF2MmTJ+XYAwcOlD7Gu9SZ59jYmIxfvHixEPv888/l2B9//FHG67pnU6rueqp79Ntvv5Vjb9++vebPGxwcLD2W/wYCCIFiBSAEihWAEChWAELIesGeS71IT0l3F1zn8NNPP5XxGzduyPipU6dKnl29VCdvdnZWjp2enpbx3BfsdXrw4IGMHzt2rBDbvn27HLuwsFDlKa2JemGekr+vJicnC7Fz587JsfPz8zJ+/PjxkmfXOdevXy/EXGNkvfFkBSAEihWAEChWAEKgWAEIgWIFIIRKuoGu++HmFKm5Y3v27JFj3TQc95nr3Q10XbKcKSRN6ba8i5taMTAwUIi56TZuWlEnnD17VsZdB/vw4cOF2O7du+XYCF0/N/VLdQPPnz8vx+Z2d9c6d5EnKwAhUKwAhECxAhACxQpACBQrACFU0g108/oOHTok467zp6guTCeMj4/L+KVLl2R8ZWWl9LHVQn1N4zpCqsPjxjZprqO7B588eSLjqrPtun7u95C7+F6dVNcvJd3hc4vvuevsVrh1v5WyeLICEALFCkAIFCsAIVCsAIRQ6wt2N1WmimOv98tK9zLRvXzMOT839aET3Lm4BkPODifupW6TuBfvf/75ZyHmXrC7+A8//CDjdd7Ld+7ckfGRkREZP336dOljT0xMyPi1a9dKHyMHT1YAQqBYAQiBYgUgBIoVgBAoVgBCqKQb6LoZboE8xXX95ubmZPybb74pfeymcwv4dWJRPjclwnV+FNchdNMwIlD3uOvuuS26xsbGZPzKlSsffmLv4bZFc/GpqalCzN2fjlt8ca14sgIQAsUKQAgUKwAhUKwAhECxAhBCJd1AN5/KdfJu3rxZKvYubsskrI2b6+i2Fnv48GEh5rpBbvG9M2fOZI2v08WLF2VczfdzHex79+7JeCc62G5hRzcHVHX+3DHcPMK6ur48WQEIgWIFIASKFYAQKFYAQqBYAQih1m6gmwulOnmDg4NybM78wk5wnQ/VyXKrNrpOm+vM1cnNR3Tzw1TczS90+avtvFLqTDfQzXM9e/Zs6WO4rt/k5OQHndN6Uvez21Zuve9PnqwAhECxAhACxQpACBQrACFQrACE0Gq32+UHt1qvUkqL9Z1OrXrb7fbO9w3aCDmmRJ5BbIQ8y+eYU6wAoFP4byCAEChWAEKgWAEIgWIFIASKFYAQKFYAQqBYAQiBYgUghKz1rHp6etpu7SHl8ePHMv7RRx8VYjnH/RDz8/NLZf5SNjdHR+W+uroqx+7fv3/Nn5dS+RxTys
/zxYsXMq5ycjunvHnzRsa7urpkvL+/X8YfPHhQW56//vqrjKucduzYIcfu2rVLxl2eTp3X8+eff5ZxdT337dtX+ri5FhYW0tLSUqvM2Kxi1dfXZ7fXUtwWPupLvX79es6pZGu1WqWmIuTm6Kjc3Y+4is9LqXyOKeXnOT4+LuMqp9u3b8uxatuulFLasmWLjE9PT8t4d3d3bXmeP39exlVObvE5d4zcLarqvJ5uuzR1Pd3ikFVwi24q/DcQQAgUKwAhUKwAhECxAhBCJbvbOAsLCzI+OztbiE1NTcmxvb29Wcdeb27HFpXj6Oho3aez7tRLY/cyPuclvTt23dwuPoprCrkX0nW+qHbc78Tdt0qrpZt1AwMDMp7zHebgyQpACBQrACFQrACEQLECEALFCkAItXYDXTdncbE4i2D79u1yrJuy05QOUk6Hz01xiMBNIVEuXbok464z1YkumXPgwAEZz5ki5u5Bl6e7x6vgfifOkSNHCjE353C9rxtPVgBCoFgBCIFiBSAEihWAEChWAEKotRvoughqEbaVlRU51nVnOjFvTHHdFjVvyuXSJFXMa3NzAB23WJ9b3K5O7jMPHjxYiLnuprs3614Nt4rPVNciZ6G+OvFkBSAEihWAEChWAEKgWAEIodYX7O7FqXpZ6xbsGhkZyfrMnGkhVXAvGdWLTffi2b3AbNILWXd9cl68u/uhzukmuXJeGqsFFlNK6enTpzLeievpXva7hfO6u7sLseHhYTnW3ROu8bDW/HmyAhACxQpACBQrACFQrACEQLECEEKt3UCniu5PU7bich0O1SlynSbX8bx//76M1zltx+XjOnlqm6YIXT/XyTp27JiMq0UW3T3ourvue+lEl9Dlr+K595vryLv8y+LJCkAIFCsAIVCsAIRAsQIQAsUKQAi1dgPv3Lkj42rbLbd9k9OUba3cYm2qw+e6Pq6r5LonnVjEz3V41LVU2zk1jbsWbks4lb+7bmqhvpT81l25936d1L3lrr3LZ61dP4cnKwAhUKwAhECxAhACxQpACBQrACHU2g2cnp6W8YmJidLHOH36tIw3ZZ6Z6waqTpHrnrhcmtLxTMmvCDo1NVWINWWbtHdx5+iuhVpB03UOh4aGZHy9V7F9F3cuam6gm9Pq7om6utU8WQEIgWIFIASKFYAQKFYAQqBYAQih1W63yw9utV6llBbrO51a9bbb7Z3vG7QRckyJPIPYCHmWzzGnWAFAp/DfQAAhUKwAhECxAhACxQpACBQrACFQrACEQLECEELWEjE9PT3tnK2uV1dXZfzZs2eF2OvXr+XYLVu2yPjevXtLn0dKKc3Pzy+V+eOz3BxzPHr0SMa7urpkfN++fVnjy+aYUn6ebpmQFy9eFGLu2rjzzlVFnm/fvpXjVT4p6fvT5eOWn9mxY4eMf/zxxzJe5/V01G/z5cuXcmx/f7+M51znhYWFtLS01CozNqtY9fX1pbm5udLj3Q2udvPIXespdweNVqtV6q97c3PM4W4md3O79cDc+LI5/udccvJ0OxVdvXq1EHPXpqp1rqrI0+1MMz4+LuPq/nT5uHXI3Npnbv2nOq+no36b7jvJvT+VwcHB0mP5byCAEChWAEKgWAEIgWIFIIRaN4xwLxTVy9rR0VE51r14d3H3metN5bi4qN+XurhrUHRiQwa3cYc6F3dtmrRhgnvB7jZBUOfuro/bEMVdt7o2WHgXd+7q2uV2Geu6b3myAhACxQpACBQrACFQrACEQLECEEIl3UDXWXFTNFRnSf2Zf0q+s6C2uW6S4eHh0mOPHDki43XNUfwQ7lxU98xNN2lSN9BN43L3leqSuXvWbSvvvpdOcNdC/d7c9Cl3T1Q1Re7/48kKQAgUKwAhUKwAhECxAhACxQpACJV0A3Pn/OTM3+vEPDjFdSVdV8XN92s619l189fU9XHHiCynk+U6ip3o7rqF86ampmRcLabozntlZUXG65rryJMVgBAoVgBCoFgBCIFiBSCESl
6wN33qSxXcS2MX7+3tLcTcS/dOLL7muJepbmqJEmExwVzqRbW7bq7pstbpJh8it9mhphW5l/TOwYMHs8aXxZMVgBAoVgBCoFgBCIFiBSAEihWAECrpBuZ2s9Sf6ecuspfTnaqCy9Ft3aQWHnSLr7mtq3K7MHVyXUJ17m7xuQhdP0fl7+7N3HvFLVZXhdxFLVXH0k2rUR3vlFIaGhoqd3KZeLICEALFCkAIFCsAIVCsAIRAsQIQQq2L77ktptQCX7du3co6dpPm0ymuI6ZE6JK5+W4TExOFmMvdHcPln7NIYy7XDZudnZXx5eXlQsx1a133rBOLErrv1nWg1ffS3d0tx9bZxVR4sgIQAsUKQAgUKwAhUKwAhECxAhBCJd1Ax62MqLpCbp6V61o0nepWDgwMyLEPHz6U8SatrOk6c6rD5Tq17n5w+dTZbXLfrepU53Jz4+rsblZF/TZdd3e98+HJCkAIFCsAIVCsAIRAsQIQAsUKQAitdrtdfnCr9SqlpDeFa77edru9832DNkKOKZFnEBshz/I55hQrAOgU/hsIIASKFYAQKFYAQqBYAQiBYgUgBIoVgBAoVgBCyFoipqenp6220XbLbbx48ULGV1dXC7E3b97knErq7++X8U2bNsn4/Pz8Upk/PnM55nr27Fkh9vLlSznW5dLV1ZX1mWVzTCk/T3XNUtLX2OXploLJ/b7rzNNt6rB58+ZC7PXr13Ls1q1bZfzLL78sfR4p1ZunO3d137rjujxzLCwspKWlpVaZsVnFqq+vL83NzRXid+7ckePd2kCquLk1nZy7d+/KuPtiW61Wqb/udTnmunTpUiHmdkOZnp6W8dx1q8rmmFJ+nu4fJJWTy/Prr7+W8dw1y+rM063RpNbocuft1uFy34tTZ57u3NV9Ozk5KcdWsd7Y4OBg6bH8NxBACBQrACFQrACEQLECEEIlG0Zcu3ZNxt1W3GoB+tHRUTnWvcSromNXp5mZmULMvTBv0vbxbuOOnA0jXD7qO2kad+7qe8ndml1txpBSZ+5lt3nH4mLxnX5uI6EuPFkBCIFiBSAEihWAEChWAEKgWAEIoZJuoNsu3HWW1HjXKWlSp0xxOapOaBVbk9dNdYNSyrvGOZ3Dpjl58qSMq6kyrotX1RzIOuVcz6mpKTlWTc1Jqb48ebICEALFCkAIFCsAIVCsAIRAsQIQQiXdQMd1llTcdSea3kFy3UDFdZqaZGhoSMZ7e3tlXC286OadufzdNe5E98zdhyrP06dPy7G5iwl2guu+q/mb7jq4Y7jrv1Y8WQEIgWIFIASKFYAQKFYAQqjkBXsVi4qdOXOmilNZd27XF2X37t0yPjAwIOOXL1+WcfcSvE4HDx5c8zHctA33gr0Ti/W5JoC6Rm5aUdOniKVUzQKJ7rvKmWaXgycrACFQrACEQLECEALFCkAIFCsAIVTSDXSdBdctUVMXnLo6C1VxC5Apw8PDWcd24+vsBrruptsqTXWPXHfP3Q8RpiGpPN15R9hyrAqug+9+E2udhsOTFYAQKFYAQqBYAQiBYgUgBIoVgBAq6Qa6DpLajiolvWiZmx/XlK6f4zo/OR0uN7dyYmJCxutckNB1dl2HR3X43P2Q0zmtmztHN59VjW/6wpDv4vLPWUzy6dOnMu66/er7evv2benP48kKQAgUKwAhUKwAhECxAhACxQpACJV0A10HwW1VtLKyUojVtX1P3Vy3UnX4XDfMdf3cHMBObFHlqK7S0aNH1/9EMrmup+uSqZzcdYvAbRc2MjJS+hiug+/uW/Wdd3V1lf48nqwAhECxAhACxQpACBQrACFQrACE0Gq32+UHt1qvUkqL9Z1OrXrb7fbO9w3aCDmmRJ5BbIQ8y+eYU6wAoFP4byCAEChWAEKgWAEIgWIFIASKFYAQKFYAQqBYAQgha4mYnp6eds7yJP/++6+Mq4XjN23aJMdu3bpVxnft2lX6PFJKaX5+fqnMH5/l5uiohfAfPXqUdYz+/n4Zd9
9V2RxTys/z2bNnMv7HH38UYl999ZUc65ZlyVVFnqurq3L88+fPZfzvv/8uxNz97ZY92bNnj4xv27ZNxuu8njkeP34s47t375Zxd38qCwsLaWlpqVVmbFax6uvrS3Nzc6XHu3Wu1I4o7ot2ayO5HWGcVqtV6q97c3N0VEF2F9e5e/eujLvvqmyO/zlGTp5uLa7Lly8XYt99950c69Y5ylVFnsvLy3L82NiYjN+7d68Q++mnn+RY9w/s999/L+PHjx+X8TqvZw73G3RrYuUUzcHBwdJj+W8ggBAoVgBCoFgBCIFiBSCESjaMcNxL2YcPH5aKpeS3onbbszdlM4WoW4u7DRPchh7qpbm7Nk1a4ePJkycyPj8/L+MnTpwoFUtJv4xPKaULFy5kfWYnqJfm7l6uqrtbFk9WAEKgWAEIgWIFIASKFYAQKFYAQqikGzgzMyPjrpM3PDxciLnOoduefb25LpmbUuTyUY4cOSLjnehsug6Puw6qe+TGuu+qE9f48OHDMu46eYrrKN64cUPGz507V/rYdXO/2TNnzhRiV69elWPHx8dlPOfez8GTFYAQKFYAQqBYAQiBYgUgBIoVgBBqnRvouC6CsrjYjB2x3UJjIyMj63siHeIWO1RdUjfPsynzNj+E6vy5FVEPHTok42fPnq30nNbCXU/VqXdjWy29wKe7zmrRzRw8WQEIgWIFIASKFYAQKFYAQqjkBbvb/cJRL2XdNA83FcW98K7rT/3dS0aXu2oiTE1NybERFupzU2LU9+Ku2Xov1lYltY2W263o4sWLMt7d3V3pOZXhfieuCaKus1tM0Vnri3SHJysAIVCsAIRAsQIQAsUKQAgUKwAh1DrdZvv27TKuOnZuCo5b9K4pUzdclyzn/JqSy7u4hfNUt8l1Pf/buK243JZbp06dqvN0JNeZc53JW7duFWJN6VbzZAUgBIoVgBAoVgBCoFgBCIFiBSCEWruBrhNx+/bt0sdw3cDc+UrrLafDNzs7K+OuC9OJ7mHO933//v2suDt2J7boGhsbk/Hl5eVCzG255e7ZJhkaGiodd/ML1bZddeLJCkAIFCsAIVCsAIRAsQIQAsUKQAi1dgPd6ppqntnMzIwc6zoRTV91Uq0g6lbQdPPumtQNdNdSdXbdtXRcdzj3OFW4cuWKjKsO3/Hjx+XYycnJSs+p09xvcHR0dF3PgycrACFQrACEQLECEALFCkAIFCsAIbTa7Xb5wa3Wq5TSYn2nU6vedru9832DNkKOKZFnEBshz/I55hQrAOgU/hsIIASKFYAQKFYAQqBYAQiBYgUgBIoVgBAoVgBCoFgBCIFiBSCE/wOpbJjR0HqvRgAAAABJRU5ErkJggg==\n",
"text/plain": [
"