{ "cells": [ { "cell_type": "markdown", "id": "5e92ec80", "metadata": {}, "source": [ "# Deep Learning methods and Neural Networks\n", "\n", "\n", "**Universal approximation theorem**: A continuous function can be approximated to an arbitrary accuracy if one has at least one hidden layer with finite number of neurons in neural network. The non-linear/activation function can be sigmoid (fermi) [Cybenko in 1989], or just general nonpolynomial bounded activation function [Leshno in 1993 and Pinkus in 1999].\n", "\n", "The multilayer architecture of NN gives neural networks the potential of being universal approximators.\n", "\n", "Given a function $y=F(x)$ with $x\\in [0,1]^d$ and $f(z)$ is a non-linear bounded activation function and $\\epsilon>0$ is chosen accuracy, there is a one layer NN with $w\\in \\mathbb{R}^{m\\times n}$ and $b\\in \\mathbb{R}^n$ and $x\\in\\mathbb{R}^m$ and $z_j=\\sum w_{ij} x_i + b_j$ so that $|\\sum_i w^{(2)}_{ij} f(z_i)+b_j-F(x_j)|<\\epsilon$." ] }, { "cell_type": "markdown", "id": "1d5a5b90", "metadata": {}, "source": [ "Conceptually, it is helpful to divide neural networks into four\n", "categories:\n", "1. general purpose neural networks for supervised learning,\n", "\n", "2. neural networks designed specifically for image processing, the most prominent example of this class being Convolutional Neural Networks (CNNs),\n", "\n", "3. neural networks for sequential data such as Recurrent Neural Networks (RNNs), and\n", "\n", "4. neural networks for unsupervised learning such as Deep Boltzmann Machines.\n", "\n", "In natural science, DNNs and CNNs have already found numerous\n", "applications. In statistical physics, they have been applied to detect\n", "phase transitions in 2D Ising and Potts models, lattice gauge\n", "theories, and different phases of polymers, or solving the\n", "Navier-Stokes equation in weather forecasting. Deep learning has also\n", "found interesting applications in quantum physics. Various quantum\n", "phase transitions can be detected and studied using DNNs and CNNs,\n", "topological phases, and even non-equilibrium many-body\n", "localization. Representing quantum states as DNNs quantum state\n", "tomography are among some of the impressive achievements to reveal the\n", "potential of DNNs to facilitate the study of quantum systems.\n", "\n", "\n", "
, { "cell_type": "markdown", "id": "aa10f003", "metadata": {}, "source": [ "Figure: Sketch of the neural network. The input layer is on the left ($x_i=a_i^{(0)}$) and the output layer ($a_i^{(L)}$) on the right; the latter is compared through the cost function with the target $t_i$. The layers between $0$ and $L$ are called hidden layers, and they increase the flexibility of the network. $f(z)$ is a nonlinear activation function." ] }
\n", "\n", "An artificial neural network (ANN), is a computational model that\n", "consists of layers of connected neurons (sometimes called nodes or units). \n", "\n", "The equations are sketched in the figure, and in matrix form read:\n", "\\begin{eqnarray}\n", "\\textbf{z}^l &=& (\\textbf{a}^{l-1}) \\textbf{w}^l + \\textbf{b}^l\\\\\n", "a_i^l &=& f(z_i^l)\n", "\\end{eqnarray}\n", "Here $l$ in $a_i^l,z_i^l$ stands for the layer $l$. Using input parameters $a^{l-1}$ in layer $l$ we get output $z^l$, which are than passed through a non-linear activation function $f$ to obtain $a^l$. This in turn allowes one to calculate the next layer. Note that we used many yet to be determined weights $w$ and $b$, which are determined so that they best fit the known data, i.e., on input $x_i$ give as good approximation to target $t_i$ as possible.\n", "\n", "\n", "We start with input $x_i$, which defines the first layer $a^{0}$, and we end with output layer $a^L$, which delivers the output, and is needed to evaluate the cost function:\n", "\\begin{eqnarray}\n", "a_i^{0}\\equiv x_i\\\\\n", "C(\\{w,b\\})&=&\\frac{1}{2}\\sum_i (a_i^L-t_i)^2\n", "\\end{eqnarray}\n", "The target $t$ is the known data we train on, which was called $y$ in the linear regression. To compare with linear regression $\\widetilde{y}$ is the output layer $a_i^L$. \n", "\n", "\n", " \n", "\n", "NN is supposed to mimic a biological nervous system by letting each\n", "neuron interact with other neurons by sending signals in the form of\n", "mathematical functions between layers. A wide variety of different\n", "ANNs have been developed, but most of them consist of an input layer,\n", "an output layer and eventual layers in-between, called *hidden\n", "layers*. All layers can contain an arbitrary number of nodes, and each\n", "connection between two nodes is associated with a weight variable $w_{ij}$ and $b_i$.\n", "\n", "\n", "Withouth the nonlinear activation function NN would be equivalent to the linear regression (convince yourself). The added nonlinearity through activation function $f$ is thus crucial for the success of NN. Many choices of activation functions are in use. We mention just a few:\n", "* sigmoid (fermi) $f(z)=1/(e^{-z}+1)$\n", "* rectified linear unit (Relu) $f(z)=max(0,z)$\n", "* $\\tanh(z)$, which is related to fermi by $\\tanh(z/2)=f(-z)-f(z)$\n", "* Exponential linear unit (Elu): $f(z)= \\textrm{if}(z<0) ( \\alpha(e^{t z}-1 )\\textrm{else}(z)$ with $z\\ll 1$\n", "* Leaky Relu : $f(z)=\\textrm{if}(z<0) (\\alpha z )\\textrm{else}(z)$ with $z\\ll 1$" ] }, { "cell_type": "markdown", "id": "5fdb2ee8", "metadata": {}, "source": [ "### Simple example OR and XOR gate\n", "\n", "As we will show, the OR gate can be easily fit with linear regression, however, XOR gate can not be, and requires at least one hidden layer. \n", "\n", "Figure: OR and XOR gate with line that can or can not describe it.
\n", "\n", "The OR gate \n", "\\begin{equation}\n", "\\begin{array}{c|c|c}\n", "x_1 & x_2 & t\\\\\n", "\\hline\n", "0 & 0 & 0\\\\\n", "0 & 1 & 1\\\\\n", "1 & 0 & 1\\\\\n", "1 & 1 & 1\n", "\\end{array}\n", "\\end{equation}\n", "and XOR gate\n", "\\begin{equation}\n", "\\begin{array}{c|c|c}\n", "x_1 & x_2 & t\\\\\n", "\\hline\n", "0 & 0 & 0\\\\\n", "0 & 1 & 1\\\\\n", "1 & 0 & 1\\\\\n", "1 & 1 & 0\n", "\\end{array}\n", "\\end{equation}\n", "\n", "Let's try linear regression. The design matrix should contain a constant and linear term, i.e., $X^T=[1,x_1,x_2]$, which is \n", "\n", "\\begin{equation}\n", "X=\\begin{bmatrix} \n", "1& 0 & 0 \\\\\n", "1& 0 & 1 \\\\\n", "1& 1 & 0 \\\\\n", "1& 1 & 1\n", "\\end{bmatrix}\n", "\\end{equation}\n", "and linear regression gives $\\widetilde{y} = X \\beta = X (X^T X)^{-1} X^T y$. \n", "\n", "It is easy to check that for $y^T_{OR}=[0,1,1,1]$ we get \n", "$X(X^T X)^{-1} X^T y_{OR} =[1/4,3/4,3/4,5/4]$ while for $y^T_{XOR}=[0,1,1,0]$ we get $X(X^T X)^{-1} X^T=[1/2,1/2,1/2,1/2]$. If we assume that $\\widetilde{y}_i<1/2$ means 0 and $\\widetilde{y}_i>1/2$ is 1, we reproduce OR get, but clearly fail at XOR.\n", "\n", "As we will show below, one hidden layer can easily give XOR gate. A small technicality first: In the linear regression we wanted to have a constant allowed in the fit, hence our $X^T$ started with unity (to allow $\\beta_0$ as constat). In ML we always add constant explicitely as an additional degree of freedom (see equations above), hence $X^T$ doe not need to have unity, and it will just be $X^T=[x_1,x_2]$. More precisely \n", "\\begin{equation}\n", "X=\\begin{bmatrix} \n", "0 & 0 \\\\\n", "0 & 1 \\\\\n", "1 & 0 \\\\\n", "1 & 1\n", "\\end{bmatrix}\n", "\\end{equation}\n", "For the activation function $f(z)$ we will choose **Relu**: $f(z)=max(z,0)$.\n", "\n", "We will choose two neurons in the hidden layer, hence $w_h$ is $2x2$ matrix and $b_h$ is two component vector, in terms of which $\\textbf{z}^h=\\textbf{X}\\textbf{w}^h+\\textbf{b}^h$, $\\textbf{a}^h=f(\\textbf{z}^h)$, and the output $\\textbf{y}\\equiv \\textbf{a}^o=\\textbf{a}^{(h)}\\textbf{w}^o+\\textbf{b}^o$\n", "\n", "The minimization would give the following weights\n", "\\begin{eqnarray}\n", "&& \\textbf{w}^h=\\begin{bmatrix} \n", "1 & 1 \\\\\n", "1 & 1 \n", "\\end{bmatrix}\\\\\n", "&& \\textbf{b}^h=\\begin{bmatrix} \n", "0 & -1 \n", "\\end{bmatrix}\\\\\n", "&& \\textbf{w}^o=\\begin{bmatrix} \n", "1 \\\\\n", "-2 \n", "\\end{bmatrix}\\\\\n", "&& \\textbf{b}^o=0\n", "\\end{eqnarray}\n", "\n", "Which means that \n", "\\begin{eqnarray}\n", "\\textbf{z}^h=\n", "\\begin{bmatrix} \n", "0 & -1 \\\\\n", "1 & 0\\\\\n", "1 & 0\\\\\n", "2 & 1\n", "\\end{bmatrix}\\\\\n", "\\textbf{a}^h=\n", "\\begin{bmatrix} \n", "0 & 0 \\\\\n", "1 & 0\\\\\n", "1 & 0\\\\\n", "2 & 1\n", "\\end{bmatrix}\n", "\\end{eqnarray}\n", "and finally\n", "\\begin{eqnarray}\n", "\\textbf{a}^h \\textbf{w}^o=\n", "\\begin{bmatrix} \n", "0 \\\\ 1 \\\\ 1 \\\\ 0\n", "\\end{bmatrix}\n", "\\end{eqnarray}\n", "which is identical to target $t$ for XOR gate, and concludes our example." 
, { "cell_type": "markdown", "id": "af4ac7f7", "metadata": {}, "source": [ "To solve the NN problem we usually distinguish the following steps: \n", "0) Randomly initialize the weights and biases.\n", "\n", "1) The feed-forward stage, which calculates all $\\textbf{a}^l$ and $\\textbf{z}^l$, including the output $\\textbf{a}^L$ to be used in the cost function, and compares it with the target $\\textbf{t}$.\n", "\n", "2) The back-propagation stage follows, in which one calculates the gradients with respect to the weights $\\textbf{w}$ and biases $\\textbf{b}$ ($\\partial C/\\partial \\textbf{w}$ and $\\partial C/\\partial \\textbf{b}$). Using a minimization algorithm (including stochastic approaches), we move towards a minimum, which is hopefully close or equal to the global minimum.\n", "\n", "3) We repeat the two steps (1) and (2) until the error of the cost function is acceptable and the model is trained well enough." ] }, { "cell_type": "markdown", "id": "0a4fde56", "metadata": {}, "source": [ "## Back propagation and automatic differentiation\n", "\n", "It is convenient to differentiate from the end of the NN towards the start, hence we call this back propagation. We start by differentiating the cost function \n", "$$C(\\{w,b\\})=\\frac{1}{2}\\sum_i (a_i^L-t_i)^2,$$ which gives\n", "\\begin{eqnarray}\n", "&&\\frac{\\partial C}{\\partial w_{kj}^L}=\\sum_i (a_i^L-t_i) \\frac{\\partial a_i^L}{\\partial w_{kj}^L}\\\\\n", "&&\\frac{\\partial C}{\\partial b_{j}^L}=\\sum_i (a_i^L-t_i) \\frac{\\partial a_i^L}{\\partial b_{j}^L}\n", "\\end{eqnarray}\n", "Because $a_i^L=f(z_i^L)$, we have\n", "\\begin{eqnarray}\n", "&&\\frac{\\partial C}{\\partial w_{kj}^L}=\\sum_i (a_i^L-t_i) f'(z_i^L) \\frac{\\partial z_i^L}{\\partial w_{kj}^L}\\\\\n", "&&\\frac{\\partial C}{\\partial b_{j}^L}=\\sum_i (a_i^L-t_i) f'(z_i^L) \\frac{\\partial z_i^L}{\\partial b_{j}^L}\n", "\\end{eqnarray}\n", "Finally $z_i^L = \\sum_j a_j^{L-1} w_{ji}^L+b_i^L$, hence\n", "\\begin{eqnarray}\n", "&&\\frac{\\partial z_i^L}{\\partial w_{kj}^L}=a_k^{L-1} \\delta_{ij}\\\\\n", "&&\\frac{\\partial z_i^L}{\\partial b_{j}^L} = \\delta_{ij}\n", "\\end{eqnarray}\n", "which finally gives\n", "\\begin{eqnarray}\n", "&&\\frac{\\partial C}{\\partial w_{kj}^L}=(a_j^L-t_j) f'(z_j^L) a_k^{L-1}\n", "\\\\\n", "&&\\frac{\\partial C}{\\partial b_{j}^L}=(a_j^L-t_j) f'(z_j^L)\n", "\\end{eqnarray}" ] }, { "cell_type": "markdown", "id": "107bed54", "metadata": {}, "source": [ "Next we define the quantity $$\\delta_j^L\\equiv (a_j^L-t_j) f'(z_j^L)$$ in terms of which we can express\n", "\\begin{eqnarray}\n", "&&\\frac{\\partial C}{\\partial w_{kj}^L}=\\delta_j^L a_k^{L-1}\n", "\\\\\n", "&&\\frac{\\partial C}{\\partial b_{j}^L}=\\delta_j^L\n", "\\end{eqnarray}\n", "Note that $\\delta_j^L$ can also be viewed as $$\\delta_j^L=\\frac{\\partial C}{\\partial a_j^L}\\frac{\\partial a_j^L}{\\partial z_j^L}=\\frac{\\partial C}{\\partial z_j^L}$$\n", "\n", "\n", "We then proceed to the previous layer, and obtain\n", "\\begin{eqnarray}\n", "\\frac{\\partial C}{\\partial w_{kj}^{L-1}}=\\sum_{i,n} \\frac{\\partial C}{\\partial a_n^{L}}\\frac{\\partial a_n^L}{\\partial z_n^L}\\frac{\\partial z_n^L}{\\partial a_i^{L-1}}\\frac{\\partial a_i^{L-1}}{\\partial z_i^{L-1}}\n", "\\frac{\\partial z_i^{L-1}}{\\partial w_{kj}^{L-1}}\n", "\\end{eqnarray}\n", "We then note that $$\\frac{\\partial C}{\\partial a_n^{L}}\\frac{\\partial a_n^L}{\\partial z_n^L}=\\delta^L_n$$\n", "and because\n", "$z_n^L=\\sum_i a_i^{L-1} w^L_{in} + b_n^L$ we have\n", "$$\\frac{\\partial z_n^L}{\\partial a_i^{L-1}}=w_{in}^L$$\n", "furthermore\n", "$$\\frac{\\partial a_i^{L-1}}{\\partial z_i^{L-1}}=f'(z_i^{L-1})$$\n",
"and further\n", "$z_i^{L-1}=\\sum_k a_k^{L-2} w^{L-1}_{ki} + b_i^{L-1}$ so that\n", "$$\\frac{\\partial z_i^{L-1}}{\\partial w_{kj}^{L-1}}=a_K^{L-2}\\delta_{ij}$$\n", "so that collecting all of that leads to\n", "$$\n", "\\frac{\\partial C}{\\partial w_{kj}^{L-1}}=\\sum_{i,n}\\delta_n^L w_{in}^L f'(z_i^{L-1})\\delta_{ij}a_k^{L-2}=\n", "\\sum_n\\delta_n^L w_{jn}^L f'(z_j^{L-1})a_k^{L-2}\n", "$$\n", "Now we will require that \n", "\\begin{eqnarray}\n", "\\frac{\\partial C}{\\partial w_{kj}^{l}}=\\delta_j^l a_k^{l-1}\n", "\\end{eqnarray}\n", "which gives us the following expression\n", "\\begin{eqnarray}\n", "\\delta_j^{L-1}=\\sum_n\\delta_n^L w_{jn}^L f'(z_j^{L-1})\n", "\\end{eqnarray}\n", "We can verify that this equation connects every layer with the previous layer, i.e., this equation is valid for every $l$, not just $L-1$. \n", "Similarly we can show that the derivative with respect to $b$ has the same form, namely, \n", "\\begin{eqnarray}\n", "\\frac{\\partial C}{\\partial b_{j}^{l}}=\\delta_j^l \n", "\\end{eqnarray}\n", "\n", "In conclusion, we just showed that the automatic differentiation in back propagation leads to the following set of equations\n", "\n", "\\begin{eqnarray}\n", "\\frac{\\partial C}{\\partial w_{kj}^{l}}&=&\\delta_j^l a_k^{l-1}\\\\\n", "\\frac{\\partial C}{\\partial b_{j}^{l}}&=&\\delta_j^l\n", "\\end{eqnarray}\n", "in which $\\delta_j^l$ can be all obtained by the following recursion relation\n", "\\begin{eqnarray}\n", "\\delta_j^{l}&=&\\sum_n\\delta_n^{l+1} w_{jn}^{l+1} f'(z_j^{l})\n", "\\end{eqnarray}\n", "and the starting condition\n", "\\begin{eqnarray}\n", "\\delta_j^{L}&=&(a_j^L-t_j) f'(z_j^L)\n", "\\end{eqnarray}" ] }, { "cell_type": "markdown", "id": "9f4a3a1d", "metadata": {}, "source": [ "### Final algorithm\n", "1) Initialize all variables to be minimized $\\{w,b\\}$ and perform the forward pass to compute all $a^l$ parameters.\n", "2) With current values of $\\{w,b\\}$ and $a^l$ we compute all gradients $\\frac{\\partial C}{\\partial w_{kj}^{l}}$ and $\\frac{\\partial C}{\\partial b_{j}^{l}}$ and using one of the available minimization routines we take a step towards more optimal variables $\\{w,b\\}$. Usually one uses some type of gradient descent method, as discussed previously\n", "$$\n", "w^{(j+1)} = w^{(j)} - \\gamma_j \\frac{\\partial C}{\\partial w^{(j)}}\n", "$$\n", "here $j$ stands for the iteration.\n", "3) We repeat (1) and (2) until we find local minima\n", "4) We change hiperparameter or change initial condistions to try finding different local minima." ] }, { "cell_type": "markdown", "id": "1fbd2e56", "metadata": {}, "source": [ "### Example code from MNIST dataset on handwritten numbers\n", "\n", "We will develop NN code to recognize the handwritten digits. 
, { "cell_type": "markdown", "id": "1fbd2e56", "metadata": {}, "source": [ "### Example code: recognizing handwritten digits (MNIST-like dataset)\n", "\n", "We will develop NN code to recognize handwritten digits. The data is the digits dataset included in sklearn, a small MNIST-like set of $8\\times 8$ images.\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "345d96ca", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "inputs = (n_inputs, pixel_width, pixel_height) = (1797, 8, 8)\n", "labels = (n_inputs) = (1797,)\n" ] } ], "source": [ "from numpy import *\n", "import matplotlib.pyplot as plt\n", "from sklearn import datasets\n", "\n", "# ensure the same random numbers appear every time\n", "random.seed(0)\n", "\n", "# display images in notebook\n", "%matplotlib inline\n", "plt.rcParams['figure.figsize'] = (12,12)\n", "\n", "\n", "# load the digits dataset (8x8 images) bundled with scikit-learn\n", "digits = datasets.load_digits()\n", "\n", "# define inputs and labels\n", "inputs = digits.images # x_i\n", "labels = digits.target # t_i\n", "\n", "print('inputs = (n_inputs, pixel_width, pixel_height) =',inputs.shape)\n", "print('labels = (n_inputs) =',labels.shape)" ] }, { "cell_type": "markdown", "id": "33b16078", "metadata": {}, "source": [ "Here we reshape the images so that we have a **design matrix** with 64 pixel features per image. We also plot a few example digits." ] }, { "cell_type": "code", "execution_count": 2, "id": "adbeef35", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "X = (n_inputs, n_features) = (1797, 64)\n" ] }, { "data": { "image/png": "<base64-encoded PNG of example digits omitted>", "text/plain": [ "