{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Additional Strategies for Confronting the Partition Function\n", "\n", "In the [previous post](https://cavaunpeu.github.io/2018/10/20/thorough-introduction-to-boltzmann-machines/) we introduced Boltzmann machines and the infeasibility of computing the gradient of its log-partition function $\\nabla_{\\theta}\\log{Z}$. To this end, we explored one strategy for its approximation: Gibbs sampling. Gibbs sampling is a viable alternative because the expression for this gradient simplifies to an expectation over the model distribution, which can be approximated with Monte Carlo samples.\n", "\n", "In this post, we'll highlight the imperfections of even this approach, then present more preferable alternatives.\n", "\n", "## Pitfalls of Gibbs sampling\n", "\n", "To refresh, the two gradients we seek to compute in a reasonable amount of time are:\n", "\n", "$$\n", "\\nabla_{w_{i, j}}\\log{Z} = \\mathop{\\mathbb{E}}_{x \\sim p_{\\text{model}}} [x_i x_j]\\\\\n", "\\nabla_{b_{i}}\\log{Z} = \\mathop{\\mathbb{E}}_{x \\sim p_{\\text{model}}} [x_i]\n", "$$\n", "\n", "Via Gibbs sampling, we approximate each by:\n", "\n", "1. Burning in a Markov chain w.r.t. our model, then selecting $n$ samples from this chain\n", "2. Evaluating both functions ($x_i x_j$, and $x_i$) at these samples\n", "3. Taking the average of each\n", "\n", "Concretely:\n", "\n", "$$\n", "\\nabla_{w_{i, j}}\\log{Z} \\approx \\frac{1}{N}\\sum\\limits_{k=1}^N x^{(k)}_i x^{(k)}_j\\quad\\text{where}\\quad x^{(k)} \\sim p_{\\text{model}}\\\\\n", "\\nabla_{b_{i}}\\log{Z} \\approx \\frac{1}{N}\\sum\\limits_{k=1}^N x^{(k)}_i\\quad\\text{where}\\quad x^{(k)} \\sim p_{\\text{model}}\n", "$$\n", "\n", "**We perform this sampling process at each gradient step.**\n", "\n", "### The cost of burning in each chain\n", "\n", "Initializing a Markov chain at a random sample incurs a \"burn-in\" process which comes at non-trivial cost. 
Paying this cost at every gradient step adds up quickly. How can we do better?\n", "\n", "**In the remainder of the post, we'll explore two new directives for approximating the negative phase more cheaply, and the algorithms they birth.**\n", "\n", "## Directive \#1: Cheapen the burn-in process\n", "\n", "## Stochastic maximum likelihood\n", "\n", "SML starts from the following premise: initialize our chain at a point already close to the model's true distribution, thereby reducing or perhaps eliminating the cost of burn-in altogether. **Given this, at what sample do we initialize the chain?**\n", "\n", "In SML, we simply initialize at the terminal value of the previous chain (i.e. the one we manufactured to compute the gradients of the previous mini-batch). **As long as the model has not changed significantly since, i.e. as long as the previous parameter update (gradient step) was not too large, this sample should exist in a region of high probability under the current model.**\n", "\n", "In code, this might look like:\n", "\n", "```python\n", "n_obs, dim = X.shape # X holds all of our observations\n", "\n", "# Vanilla Gibbs sampling\n", "samples = [np.zeros(dim)]\n", "\n", "# SML\n", "samples = [previous_samples[-1]]\n", "```\n", "\n", "### Implications\n", "\n", "Per the expression for the full log-likelihood gradient, e.g. 
$\\nabla_{w_{i, j}}\\log{\\mathcal{L}} = \\mathop{\\mathbb{E}}_{x \\sim p_{\\text{data}}} [x_i x_j] - \\mathop{\\mathbb{E}}_{x \\sim p_{\\text{model}}} [x_i x_j]$, the negative phase works to \"reduce the probability of the points in which the model strongly, yet wrongly, believes\".[^1] Since we approximate this term at each parameter update with samples *roughly from* the current model's true distribution, **we do not encroach on this foundational task.**\n", "\n", "## Contrastive divergence\n", "\n", "Alternatively, in the contrastive divergence algorithm, we initialize the chain at each gradient step with a sample from the data distribution.\n", "\n", "### Implications\n", "\n", "With no guarantee that the data distribution resembles the model distribution, we may systematically fail to sample, and thereafter \"suppress,\" points that are incorrectly likely under the latter (as they do not appear in the former!). **This invites the growth of aptly named \"spurious modes\"** in our model.[^1]\n", "\n", "In code, this might look like:\n", "\n", "```python\n", "# Vanilla Gibbs sampling\n", "samples = [np.zeros(dim)]\n", "\n", "# SML\n", "samples = [previous_samples[-1]]\n", "\n", "# Contrastive divergence\n", "samples = [X[np.random.choice(n_obs)]]\n", "```\n", "\n", "Cheapening the burn-in phase indeed gives us a more efficient training routine. 
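\n", "\n", "To make the comparison concrete, here is a minimal, self-contained sketch of Gibbs sampling for a toy Boltzmann machine under each of the three initializations. (Everything below, including `gibbs_step` and the toy `W`, `b`, and `X`, is illustrative scaffolding, not code from the previous post.)\n", "\n",
"```python\n",
"import numpy as np\n",
"\n",
"def sigmoid(z):\n",
"    return 1.0 / (1.0 + np.exp(-z))\n",
"\n",
"def gibbs_step(x, W, b, rng):\n",
"    # Resample each unit given all the others:\n",
"    # p(x_i = 1 | x_minus_i) = sigmoid(sum_j w_ij x_j + b_i)\n",
"    x = x.copy()\n",
"    for i in range(len(x)):\n",
"        activation = W[i] @ x - W[i, i] * x[i] + b[i]\n",
"        x[i] = float(rng.random() < sigmoid(activation))\n",
"    return x\n",
"\n",
"def run_chain(init, W, b, n_steps, rng):\n",
"    x = init.copy()\n",
"    for _ in range(n_steps):\n",
"        x = gibbs_step(x, W, b, rng)\n",
"    return x\n",
"\n",
"rng = np.random.default_rng(0)\n",
"dim = 5\n",
"W = rng.normal(scale=0.1, size=(dim, dim))\n",
"W = (W + W.T) / 2  # symmetric weights\n",
"b = rng.normal(scale=0.1, size=dim)\n",
"X = (rng.random((100, dim)) > 0.5).astype(float)  # toy binary observations\n",
"\n",
"# Vanilla Gibbs sampling: arbitrary init, long burn-in\n",
"sample_vanilla = run_chain(np.zeros(dim), W, b, n_steps=1000, rng=rng)\n",
"\n",
"# SML: initialize at the previous chain's terminal sample; burn-in is (mostly) free\n",
"sample_sml = run_chain(sample_vanilla, W, b, n_steps=1, rng=rng)\n",
"\n",
"# Contrastive divergence: initialize at a data point\n",
"sample_cd = run_chain(X[rng.integers(len(X))], W, b, n_steps=1, rng=rng)\n",
"```\n",
"\n", "In this light, SML amortizes the burn-in across gradient steps: each update's chain simply picks up where the last one left off.\n", "\n", "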
Moving forward, what are some even more aggressive strategies we can explore?\n", "\n", "## Directive \#2: Skip the computation of $Z$ altogether\n", "\n", "Canonically, we write the log-likelihood of our Boltzmann machine as follows:\n", "\n", "\n", "\\begin{align*}\n", "\\log{\\mathcal{L}(x)}\n", "&= \\log{\\frac{\\exp{(H(x))}}{Z}}\\\\\n", "&= \\log{\\big(\\exp{(H(x))}\\big)} - \\log{Z}\\\\\n", "&= H(x) - \\log{Z}\n", "\\end{align*}\n", "\n", "\n", "Instead, what if we simply wrote this as:\n", "\n", "$$\n", "\\log{\\mathcal{L}(x)} = H(x) - c\n", "$$\n", "\n", "or, more generally:\n", "\n", "$$\n", "\\log{p_{\\text{model}}(x)} = \\log{\\tilde{p}_{\\text{model}}(x; \\theta)} - c\n", "$$\n", "\n", "and estimated $c$ as a parameter?\n", "\n", "**Immediately, we remark that if we optimize this model with maximum likelihood, our algorithm will trivially make $c$ arbitrarily negative.** In other words, the quickest way to increase the left-hand side is to decrease $c$.\n", "\n", "How might we better phrase this problem?\n", "\n", "## Noise contrastive estimation\n", "\n", "Ingeniously, NCE proposes an alternative:\n", "\n", "1. Posit two distributions: the model, and a noise distribution\n", "2. 
Given a data point, predict from which distribution this point was generated\n", "\n", "Let's unpack this a bit.\n", "\n", "Under an (erroneous) MLE formulation, we would optimize the following objective:\n", "\n", "$$\n", "\\theta, c = \\underset{\\theta, c}{\\arg\\max}\\ \\mathbb{E}_{x \\sim p_{\\text{data}}} [\\log{p_{\\text{model}}}(x)]\n", "$$\n", "\n", "Under NCE, we're going to replace two pieces so as to perform the binary classification task described above (with 1 = \"model\", and 0 = \"noise\").\n", "\n", "First, let's swap $\\log{p_{\\text{model}}}(x)$ with $\\log{p_{\\text{joint}}}(y = 0\\vert x)$, where:\n", "\n", "$$\n", "p_{\\text{joint}}(x\\vert y) =\n", "\\begin{cases}\n", "p_{\\text{noise}}(x)\\quad y = 0\\\\\n", "p_{\\text{model}}(x)\\quad y = 1\\\\\n", "\\end{cases}\n", "$$\n", "\n", "$$\n", "p_{\\text{joint}}(x)\n", "= p_{\\text{joint}}(y = 0)p_{\\text{noise}}(x) + p_{\\text{joint}}(y = 1)p_{\\text{model}}(x)\n", "$$\n", "\n", "$$\n", "p_{\\text{joint}}(y = 0\\vert x)\n", "= \\frac{p_{\\text{joint}}(y = 0)p_{\\text{noise}}(x)}{p_{\\text{joint}}(y = 0)p_{\\text{noise}}(x) + p_{\\text{joint}}(y = 1)p_{\\text{model}}(x)}\n", "$$\n", "\n", "Finally:\n", "\n", "$$\n", "\\theta, c = \\underset{\\theta, c}{\\arg\\max}\\ \\mathbb{E}_{x \\sim p_{\\text{data}}} [\\log{p_{\\text{joint}}(y = 0\\vert x)}]\n", "$$\n", "\n", "From here, we need to update $x \\sim p_{\\text{data}}$ to include $y$. We'll do this in two pedantic steps.\n", "\n", "First, let's write:\n", "\n", "$$\n", "\\theta, c = \\underset{\\theta, c}{\\arg\\max}\\ \\mathbb{E}_{x, y=0\\ \\sim\\ p_{\\text{noise}}} [\\log{p_{\\text{joint}}(y\\vert x)}]\n", "$$\n", "\n", "This equation:\n", "\n", "1. Builds a classifier that discriminates between samples generated from the model distribution and noise distribution, **trained only on samples from the latter.** (Clearly, this will not make for an effective classifier.)\n", "2. 
To train this classifier, we note that the equation asks us to maximize the likelihood of the noise samples under the noise distribution, where the noise distribution itself has no actual parameters we intend to train!\n", "\n", "As a remedy, we expand our expectation to one over both noise samples and data samples. In doing so, in predicting $p_{\\text{joint}}(y = 1\\vert x) = 1 - p_{\\text{joint}}(y = 0\\vert x)$, **we'll be maximizing the likelihood of the data under the model.**\n", "\n", "$$\n", "\\theta, c = \\underset{\\theta, c}{\\arg\\max}\\ \\mathbb{E}_{x, y\\ \\sim\\ p_{\\text{train}}} [\\log{p_{\\text{joint}}(y \\vert x)}]\n", "$$\n", "\n", "where:\n", "\n", "$$\n", "p_{\\text{train}}(x\\vert y) =\n", "\\begin{cases}\n", "p_{\\text{noise}}(x)\\quad y = 0\\\\\n", "p_{\\text{data}}(x)\\quad y = 1\\\\\n", "\\end{cases}\n", "$$\n", "\n", "As a final step, we'll expand our objective into something more elegant:\n", "\n", "\n", "\\begin{align*}\n", "p_{\\text{joint}}(y = 0\\vert x)\n", "&= \\frac{p_{\\text{joint}}(y = 0)p_{\\text{noise}}(x)}{p_{\\text{joint}}(y = 0)p_{\\text{noise}}(x) + p_{\\text{joint}}(y = 1)p_{\\text{model}}(x)}\\\\\n", "&= \\frac{1}{1 + \\frac{p_{\\text{joint}}(y = 1)p_{\\text{model}}(x)}{p_{\\text{joint}}(y = 0)p_{\\text{noise}}(x)}}\\\\\n", "\\end{align*}\n", "\n", "\n", "Assuming *a priori* that $p_{\\text{joint}}(x, y)$ is $k$ times more likely to generate a noise sample, i.e. 
$\\frac{p_{\\text{joint}}(y = 1)}{p_{\\text{joint}}(y = 0)} = \\frac{1}{k}$:\n", "\n", "\n", "\\begin{align*}\n", "p_{\\text{joint}}(y = 0\\vert x)\n", "&= \\frac{1}{1 + \\frac{p_{\\text{model}}(x)}{p_{\\text{noise}}(x)\\cdot k}}\\\\\n", "&= \\frac{1}{1 + \\exp\\big(\\log{\\frac{p_{\\text{model}}(x)}{{p_{\\text{noise}}(x)\\cdot k}}}\\big)}\\\\\n", "&= \\sigma\\bigg(-\\log{\\frac{p_{\\text{model}}(x)}{{p_{\\text{noise}}(x)\\cdot k}}}\\bigg)\\\\\n", "&= \\sigma\\bigg(\\log{k} + \\log{p_{\\text{noise}}(x)} - \\log{p_{\\text{model}}(x)}\\bigg)\\\\\n", "p_{\\text{joint}}(y = 1\\vert x)\n", "&= 1 - \\sigma\\bigg(\\log{k} + \\log{p_{\\text{noise}}(x)} - \\log{p_{\\text{model}}(x)}\\bigg)\n", "\\end{align*}\n", "\n", "\n", "Given a joint training distribution over $(X_{\\text{data}}, y=1)$ and $(X_{\\text{noise}}, y=0)$, this is the target we'd like to maximize.\n", "\n", "### Implications\n", "\n", "For our training data, **we require the ability to sample from our noise distribution.**\n", "\n", "For our target, **we require the ability to compute the likelihood of some data under our noise distribution.**\n", "\n", "Therefore, these criteria do place practical restrictions on the types of noise distributions that we're able to consider.\n", "\n", "### Extensions\n", "\n", "We briefly alluded to the fact that our noise distribution is non-parametric. However, there is nothing stopping us from evolving this distribution and giving it trainable parameters, then updating these parameters such that it generates increasingly \"optimal\" samples.\n", "\n", "Of course, we would have to define what \"optimal\" means. 
One interesting approach is called [Adversarial Contrastive Estimation](https://arxiv.org/abs/1805.03642), wherein the authors adapt the noise distribution to generate increasingly \"harder negative examples, which forces the main model to learn a better representation of the data.\"[^2]\n", "\n", "## Negative sampling\n", "\n", "Negative sampling is the same as NCE except:\n", "\n", "1. We consider noise distributions whose likelihood we cannot evaluate\n", "2. To accommodate, we simply set $p_{\\text{noise}}(x) = 1$\n", "\n", "Therefore:\n", "\n", "\n", "\\begin{align*}\n", "p_{\\text{joint}}(y = 0\\vert x)\n", "&= \\frac{1}{1 + \\frac{p_{\\text{model}}(x)}{p_{\\text{noise}}(x)\\cdot k}}\\\\\n", "&= \\frac{1}{1 + \\frac{p_{\\text{model}}(x)}{k}}\\\\\n", "&= \\sigma\\bigg(-\\log{\\frac{p_{\\text{model}}(x)}{k}}\\bigg)\\\\\n", "&= \\sigma\\big(\\log{k} - \\log{p_{\\text{model}}(x)}\\big)\\\\\n", "p_{\\text{joint}}(y = 1\\vert x)\n", "&= 1 - \\sigma\\big(\\log{k} - \\log{p_{\\text{model}}(x)}\\big)\n", "\\end{align*}\n", "\n", "\n", "## In code\n", "\n", "Since I learn best by implementing things, let's play around. 
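\n", "\n", "As a quick sanity check first, the two classification targets we just derived are easy to compute in isolation. (A sketch: `log_p_model` and `log_p_noise` are placeholder log-density values, not functions from this notebook.)\n", "\n",
"```python\n",
"import numpy as np\n",
"\n",
"def sigmoid(z):\n",
"    return 1.0 / (1.0 + np.exp(-z))\n",
"\n",
"def p_y0_nce(log_p_model, log_p_noise, k):\n",
"    # NCE: p(y = 0 | x) = sigmoid(log k + log p_noise(x) - log p_model(x))\n",
"    return sigmoid(np.log(k) + log_p_noise - log_p_model)\n",
"\n",
"def p_y0_negative_sampling(log_p_model, k):\n",
"    # Negative sampling: p_noise(x) = 1, so its log-density term drops out\n",
"    return sigmoid(np.log(k) - log_p_model)\n",
"\n",
"# With k = 1 and identical model and noise densities, the classifier\n",
"# should be maximally uncertain: p(y = 0 | x) = 0.5\n",
"print(p_y0_nce(np.log(0.3), np.log(0.3), k=1))  # 0.5\n",
"print(p_y0_negative_sampling(0.0, k=1))         # 0.5\n",
"```\n",
"\n", "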
Below, we train Boltzmann machines via noise contrastive estimation and negative sampling.\n" ] }, { "cell_type": "code", "execution_count": 369, "metadata": {}, "outputs": [], "source": [ "from itertools import repeat\n", "\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import seaborn as sns\n", "import torch\n", "from torch.distributions.multivariate_normal import MultivariateNormal\n", "import torch.nn as nn\n", "import torchvision\n", "import torchvision.datasets as dset\n", "import torchvision.transforms as transforms" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Load data\n", "\n", "For this exercise, we'll fit a Boltzmann machine to the [Fashion MNIST](https://www.kaggle.com/zalando-research/fashionmnist) dataset.\n", "\n", "The following code draws on a variety of posts, like [this one](https://jhui.github.io/2018/02/09/PyTorch-Data-loading-preprocess_torchvision/), which provide code for loading and displaying popular image datasets via [torchvision](https://pypi.org/project/torchvision/)."
] }, { "cell_type": "code", "execution_count": 370, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXYAAAB6CAYAAACr63iqAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4wLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvqOYd8AAAHylJREFUeJztnXmU3FW17z9b5oAMCYNkYNIoU0zAAGESZFC44AVFlsHrBb1o0AUoygqTLnhxqSvXAa/6EI1XMDwZRNAYEeFhHqigIAkQEhJCAlxDJJIAMjgh6Hl/VO3T30rOL9XprqruLvZnrV69+9Sv6nem+vXZ++y9j6WUCIIgCLqH1wx0BYIgCILWEg/2IAiCLiMe7EEQBF1GPNiDIAi6jHiwB0EQdBnxYA+CIOgy4sEeBEHQZfTrwW5mx5jZEjNbZmYXtKpSQRAEQd+xvgYomdkGwCPA0cAK4F7glJTSotZVLwiCIFhfNuzHe/cHlqWUHgMws+uAE4DKB/uwYcPS1ltv3Y9bBkEQvPpYuXLl0yml7Xp7fX8e7KOAJ+TvFcABa15kZlOAKQBbbbUVU6ZM6cctgyAIXn1Mmzbtd+tzfX9s7FYoW8uuk1KakVKamFKaOGzYsH7cLgiCIOgN/XmwrwDGyN+jgSf7V50gCIKgv/TnwX4vMNbMdjWzjYHJwOzWVCsIgiDoK322saeUXjGzs4BbgQ2AK1JKD63v50ybNq2vVXjVcskllxTLO9mXH/rQh7K82267ZfkXv/hFln2jfJNNNsllzz77bJb32muvLG+4YW0qfu5zn2t9ZddBqS870Y/77bdflk8++WQA3vSmN+WyBx54IMu///3vs/zKK68AMHbs2Fx20EEHZfnhhx/O8je+8Q0A5s+f36pqVzJQc3KrrbbK8mGHHZbls846K8u/+c1vAJgzZ04ue81reta0733ve7O80047ATB9+vRctnjx4iw//fTTraj2Oqnqy/WhP5unpJRuBm7udy2CIAiClhGRp0EQBF1Gv1bswdDFrMepqRSkduSRR2b5uOOOy7Kru2o2WLZsWZYvuKAnAPnKK68EYN68eblM3V132GGHLP/lL38BGk08atb56U9/muUf/OAHa9V3gw02yPI//vGPtV7vNIcccggAn/zkJ3PZpEmTiteuWLECgBEjRuSyt73tbVl++eWXs/znP/8ZAPUwW7VqVZY322yzLLvpwfsWGvv0zDPPzPILL7ywzvYMFOPGjcvyvvvum+Udd9wRaGzvggULsrxy5cosn3feeQCcffbZuUzjaZ58ssfnY+bMmUDj3NRx07n11FNPAY1ms4ceWm9rdFuIFXsQBEGXEQ/2IAiCLiNMMTQ3SwCccsopAPztb38rXvva1742y5tuuinQqAKrKUHf99e//hWAv//978X7zp7d40F61VVXraMVzakyV7hnwW9/+9tctnr16iyrKcC9Wu66665cpv23aNGiteSf/OQnuezQQw/N8kYbbZTlZ555BuhRb6HR82Pq1KlZ/uhHPwo0ej7ofXsznu3goosuyvL73vc+oHEOaP+OGdMTAvLEE7UA7uuvvz6XaZ+PGjUqyz5P1Pvl2GOPzbKOm5sm9P1qQrv11luz7CYyNWcMFJ/4xCeyrKYWnyMAjz32GND4vXGPIYDPf/7zWZ41axbQ2HY3aQHcd999WXYPrm233TaXuakMGr9Dm2++OQCTJ0/OZeqBdMMNN5Sa1xFixR4EQdBlxIM9CIKgywhTDL1T3a+55hqgcQdd1WVVrTX4wfnnP/+5ztdfeumlLKsXie7Iu7qsgT3rQ5W3yMUXXww0Bl+o+unBQwAbb7wx0GhG0c9VFdc9XDQARIOVNPDG1Vp9//Lly7OsY+R9/
ZWvfCWXveMd78hyJ80vylFHHZVlb5uq7vfcc0+Wb7zxxix7n2r/a1DM+PHjs+zeHBro5eMHsN12PQkAR48eDfSYhaCxHzW45wtf+ALQaNbpNMccc8xaZQ8++GCWtS+9HTo31dNF5T/84Q8A3H777blMTarNvrs6v/V7+qc//QnoMaUBTJgwIcsHH3xwltV02QlixR4EQdBlxIqd3q3wbrrpJgB22WWX4usLFy7Msq/kdZXuK1LoWaFBz4pDN4F0VaubWb7yaDVvfvObgcaNPl3N6YrFV03aZyrrStI/T1dHusIqtVlfHz58eJZVO/JVk/bvQKErXK2vj5WukHVFrxtup556KtAYO/D8889nWTcQvVw/67nnnsuyapS+2t1iiy1yma56tU9L/a+bke1C6/OGN7wBgN/9ridDrbZd56FrjDr3dP7q3HCfd9VmqvDx0rmpdVTZ760+7+o/f+CBB2Y5VuxBEARBv4gHexAEQZcRppheMnLkSKBx07C0qQhlU4z7tq/5PjdT6KaNvq5+3e3CN490M1Pboyq5t0nbpiqyqsZbbrklUDZRQKOa7f78el/ta5X9Hv750Gjq0g3YdqNZGrXuXh/1h9Z6ffnLX85yKTbAN+agUf33ftIx8U1SaDQLeCoILVOzmI6hj4WmMrjttttoN3vssUeWfTx1M1Lbrt8L/75oe9TspSkX3HSn5hU1NaoJp2Tm0++ujrHLOq76PdY6eNZIdQhoJ7FiD4Ig6DLiwR4EQdBlhCmG3nnFuAqsKr+qw6rmuReC+s0qRxxxRJZd5VZzhqpwjz/+eNO69QX3FIAerwpVP1Xl1HKXte0qq+rsKqqq0Kr2qnnFVVjtB/UOKn2uenvss88+Wb7zzjvpFHpf7Sevu8YkXHrppVlWs5dT6g/9LL2H9pNmF9Qsjf4+NVF87GMfy/Lhhx++Vh0OOKDnPPpOmGI0JsP9+NXzTE0XarrzOad9pl4+asrya/Ra7RPtS+937X+V9Vq/n85NNQ9qHTw9RphigiAIgj4RD/YgCIIuI0wxveTFF18EGj08NIhETQyuor3lLW/JZRocoaqdf66WbbPNNlnWgxFaiQa4lII91HyiJgZXgasClErpGdQ8oGabkmqs6rTeV9Vdf5/WUc1bnTTFuLcUNJqv/JxNNX28/vWvz7IH40BP27SNVf3k/aNt1/7XTI6OpjKoShXh3iWaUbMTqOnCg6vU00hfv/fee7Ps3yedW2qa0++jo/3oplUom73U20ZNYXrtH//4R6DxIBq9h9bXTU567mo7abpiN7MrzGyVmS2UsuFmdpuZLa3/3mZdnxEEQRB0jt6s2L8L/G9Ak4FfAMxJKU03swvqf5/f+uoNHkpHh2kOdvUldl9tTWCkm6577713ln2jVFddukK49tpr+1PtSiZOnJhl3xAqbQBDOTFSVeK0qk0nR1c0iq9WqzZMddPKV5q6qtJc2+1G26DzQuvr9dGVufq0l9pTFcJfiiOoulbHwjWI173udblMNyu1Dj72O++8c/FzW4lugup4e5tUqzj99NOzrM4IpbgSnW/qe+6oNthsc1rRflIt3evgKTkAvve9763VHuiZG506K6Dpij2l9Evg2TWKTwBm1uWZwIktrlcQBEHQR/q6ebpDSmklQP339lUXmtkUM5trZnNLdq8gCIKgtbR98zSlNAOYATBy5MiBSZTdAjxrm6pdqmqpiuV+uJqlTs0yHl4MPeqyqoaf/exnW1TranbfffcsuxpelUZAVVEvL/m2Q9n8UoWafvx9JRV6zfp4OLhugO2///69vm9/Uf9vjQfQ9vjmXslHGhrr7v2u/ahqesl8VWWmKvl66700RYWaEt0cpKYYHQs1e/UXNfPpuLpjQimFAjSalDz2QlMDKNqX3n/6Ha3K2OjX6FhpfTWu5KSTTgIaN2o1u6b2n5tc1Ue/XTEq0PcV+1NmtiNA/feq1lUpCIIg6A99fbDPBk6ry6cBP25NdYIgCIL+0tQUY
2bXAocD25rZCuASYDpwvZmdDiwHTm5nJQcDHu6sKnLVQQ8eSqxHfekhCvo+94VVNfHyyy9vQY3X5pJLLsmy+l+7Sqiqubaz6kg9p8r84p/RG5937xNV+fVa9SpyL4bVq1fnMvU4+da3vpXlM844Y5117wsaav/tb387y+PGjcuyH8ChR7RpP2l7PKOgemeoWad0OEnVQRBq2vD3qUlA36deIldccQXQmFmxleYXRU1ZGhei88HRmI6jjz46y1ddVXPSU799/y5B4xF2pTQB2md6rZvFtJ+qvhdervXWazWrppfrUZftNMU0fbCnlE6peOnIivIgCIJgAImUAkEQBF1GpBRYgypvA9+FVy+I+fPnZ1l3zl2dW7p0aS6rCkByFe1nP/tZ8b6tZNmyZVkunVZfdd/SISDaHlX/Vb0veXNUBWWUshaqiqyh7qtWrWqoCzSOm3omtAM9HEIPDrnmmmuyfPbZZwONAUqqmus5pT4WVWY+baf3SdV8KpnQtt++xxv5Rz/6UZa/9rWvZdm9YY4//vhcpkFsc+fOpVX8+te/zrJ+R0aMGAE0tkfPEtbMkzNmzADK5jxonHv+fdT5pPNU8Wt1Puk5pvo99/FUE5BmdHz00Uez7B5ybtJtN7FiD4Ig6DJixb4GpQ0cgIcffhho9KvVDTD1H/b/9rvuumsuU19XPwoNelYZqgm0i6uvvjrLupq48sorgcaVua4oNYzaV4FVG6a6KnLf3Sqfd12J+ufqRp9eqyssX9mpz/W0adOy/M1vfrNYt1ahG6Lvete7svz+978/y55S4NOf/nQu0z4tpV7QuacxBaXyqtgB1XhcG1QNRjf0zzrrrCx///vfX6uOixYtoh1oLvpSXnrl5ptvzvKsWbOy/Na3vhVoXC1rP+n30dM+VPnlNzuiseqYvFtvvRWAz3zmM+tsw0AQK/YgCIIuIx7sQRAEXUaYYtagKmveu9/9bqDRZFJ14rvLnmt9TVRl9GPe3vnOd+YyzRSpme6qzER94e67786ybwZOnTo1l5166qlZ1na6f2/VMWNqMnFZVV3dXFITgve7mhL0HroBeNddd61VRzVvtRv19Z4+fXqW9Yg6Dx0/+OCDc5n2o27O+Yantlc3QdWn2tG+qzpu0M1iVRvLX/rSl7Ls5qv3vOc9uezQQw/NspsdBhLNZV7aENV5pt9Hn1Pap2pSKcUB6DzVTW9F04QMNmLFHgRB0GXEgz0IgqDLCFPMOlB1zdWxW265JZep2UZNBa4aazi0mnDUT9fNNZoJ0s0+0Ohr3MrE/CV//S9+8Yu5TM0G5557bpbvv/9+oDo8u5ShsOT9Ao0mHDchaJ+qV5F6ulx44YW9ak+70JQF7p0BjR4e7hVz8cUX57KHHnooyzo3vM+qTG0lX+2q7ITa196XmgFRvaGuu+66LLtnh3qDqO/6YDDF6FFzfhSitveZZ57JsqYqcC+q0sEiUM60qX3q3l3Q+N3V9w02YsUeBEHQZcSDPQiCoMt41Zli1ITgaq2WaQi4qnkHHXQQ0BgSrAFKqua56qemHD3gQM0Nbq5Qrw4NdFFTTCspmUTU1DN+/Pgsa5v92qpzIrXcr1W1VvtaTQylgCftPz1koVl72o32h5orNHDpkUceARq9UKrC2J3SwSPQ2L/N0g9on7qs83TJkiVZXrx4cZbdFKbz0FM3DBbcgwx62q/eKzr31DvI+7U35/RuueWWQGOglnqm7bvvvlnWLJ+DjVixB0EQdBmvihV7s401zeWt//U9zBp6Nrs0hLnqDFdfLegGpK8EoHEzxlcGujrSXOnqB+3+262g2Wnp2meKX6urad1EqjoKrnTf0qpU76urRx2jgcaPvYPGvtNx9f7RFaXGNahvtGuGqslVbU77Nfq6bniWNgIV1Qo0gZn73etKdzD1OZRTTGh7dcO0lGZB+1H7RjUp/z5qsi/N+1+VEGywESv2IAiCLiMe7EEQBF3GoDLFNDMPVFHyk6462k034U488
UQADjnkkFz29re/PcunnNJzeJSr1h/84AdzmfsqQ6MK62qghjhrfXxjTa8thYIDfOADH8hyK00xzfpX/awV78sqk0vJv1rHR2XFr6065m0wqb06T3WsSlkA9fWqYwFLpgJF3+f9V4oX0NcV3XTcaaedsqxj7HWoMgcNBpo5Pmjdm22mV2WC9M/V/lXTqG6cq9l2sNF05MxsjJndbmaLzewhM/t4vXy4md1mZkvrv8tPgiAIgqCj9OZf8ivAuSmlPYBJwJlmtidwATAnpTQWmFP/OwiCIBhgenOY9UpgZV1+0cwWA6OAE4DD65fNBO4Azu9PZfoaMl+VkbGEhqbvvvvuAMybNy+XffWrX83yRz7ykSy76jZmzJhcpuYTVd1cpVYVWL1edPfe1UD1sFF5woQJzRvVB5qZvbSd6jWwPuqnq8n6npKfexVar2Z+7H014/WFZp49UPaKUTOJfobPF+1n9TrScr+2yg+75N9eOnwDGudhKbtmu/txfVFzXOmIwNLhJXqNtq0qtYX3g3531aNNzWU6toON9TKimdkuwD7APcAO9Ye+P/y3r3jPFDOba2Zzq9wDgyAIgtbR6we7mW0B3Aick1J6obfvSynNSClNTClNVD/fIAiCoD30yivGzDai9lC/OqX0w3rxU2a2Y0pppZntCLQ0/vj883usOn6upJoH9CxEP7OzXlegMbDnoosuyrKeQennmO622265bOzYsVlWDxkPaFAVzc9ShEazjKtrGjyx5557ZlnDlRcsWABUn3mq4eCdRD0mNCijdOZpVeCHX1PlXaFqtH+G9qOa2DRYphTQ00mqvFC0nT5u6gGlC5tSxkY15VSlF/B7a1kpfL7qWp2/6uGhQVdOsxQInUbnmX+3qkxHJZOUzhc1dSk+Lmr20XGp+l4MNnrjFWPAd4DFKaVL5aXZwGl1+TTgx62vXhAEQbC+9GbFfjDw78ACM/Ozvy4CpgPXm9npwHLg5P5WZvLkyUX561//OgCjRo3KZZqzfL/99suy+5zqaln/s+pxar5xqatiDfv++c9/nmX/b68rG12BlULhly1blss0V7Te74ADDgBg5513zmW6ytPNrlbSbGNMfXsVX6FWbVSVklBV+bHr6tI3Vav8kjXXuYfja1kphLxdaKoIXfnpxvDChQsBWL58eS478MADs6z7Tf4+/Sz1fy/FaVTFEWi5rjQdDY/X4xFdK1ZKR/INJKVjF6s2pEtxLPod1b4u7f3ptarl6Eq+KiZjMNAbr5g7gSr3hSNbW50gCIKgvwyu0LIgCIKg3wy4LqGbiuecc06WVaV0f3I9GkvVcDWJLF26FGg8QVzNJ2ryKKnAau4ZN27cWvXVzUHN9Kgbqb4p5X7y0LjpoqYYl73e0Kjqt8uXuKS2apke1af4Rl1vjmZzWfusKnWCX6uv6xzQcj/qTedAM5/4dqHmAVXTNTbCUbNaaeNYy3ROqlxKmVHyn9fyKt9rze7oY6Sfq68PBkqbyGouUpOK+p57uc6hKscH71/t06o5W5V2YzAQK/YgCIIuIx7sQRAEXcaAm2IWLVqU5V/96ldZPuyww7LsatVxxx2Xy1QFVhXNzSuq/qtaVfK6UJVTQ//f+MY3Ztk/T19XNVA9ONyco0elqalAj8lz1U5VaFUj2+UVU/KkUJNWlfeK96W+rn2tZiQ3AWjbVa1VSh40VeaegVaBSz7o0Fh3NQU6ehRdM9ORvl7yldf5pnXQMXQThMZNqNeXeou5eUtNDYPNj10pZXdUE2dpzukc0nmqfeZtLpm0oNGEM1AxJr0hVuxBEARdRjzYgyAIuowBN8UoU6dOzbKG9p933nkA7LXXXrlMQ/BVvfeDBNQTRk0Jqoq62rX99j35y9SzoXRmpKY1UNVO6+BqXlVmP/3cUhCIqsMaUNJutO1VGQy97lqmwWDN0Lap14v3ZSlL5pqUzFOd9IqpCmNXE5GbFY866qhcdsYZZ2RZg9Bcpa9K06Aqv8/ZklfNmp/hc
0vrq55lOreef/55oNHEM9gClEopBdRkot/BUtCR9pl+BzWwzNusc1r7tBQsNhiJFXsQBEGXMeAr9qowYPXr/vCHPww0/sf2MoDjjz8+y5MmTQIa/dH7uprT+vjqWzecnnzyySyr37Kf+H7SSSflMt10Lfl66wau3lePMmvlqrT0WSNGjCjWUVcm7ruvq2aVdUXuqx9tj66ONA7Ax7Yqv7musEqbVp08xk1XhlUbaA88UMu+ceSRPcHZuhmvq/uqDWWn1Dbtm2aybuLrSvaOO+7Iso+xjvtAb1KvSWmjVLUKlbVPfc7pKlz7qZSbvUpbrDq6cbARK/YgCIIuIx7sQRAEXcaAm2LWJ2ReVSY94k5lR8P5dXNU1Uv3X9Xsbepj/sQTT2RZzS695bLLLsuympbUb9bVRPU19uyQ0Bja38r0AqXPqjJ3qM+vb1qrj3MpUyH0qMOa4VCzRuoGt6vZel9VddVEUDqwpZObp25mgUZTi85PH08t03m4ZMmSLHvbq0wqqv6vTzv9MzQ1hm7o33LLLVl2M4WaKx577LFe36sTqJnP26bzsOr74fOzapNZP9c/Q78LOoY61/0ch/Hjx+ey+fPnF+9RyrTZTmLFHgRB0GXEgz0IgqDLGHBTTLvwY+/WlDvJmWeeOSD37St6MIiap1SeOHEi0GieUY+fUqoH9VBQFXf48OFZdn9kNUmpqUZDue+///616t7J8HdV6dXjpORJMWvWrKLcSariJtSk5QfUPP7447lssGV31FQbTlUKCp2fbh6s8nRpdkyhmmq0Lz0OoJQ+Ys3P6DSxYg+CIOgy4sEeBEHQZTQ1xZjZpsAvgU3q19+QUrrEzHYFrgOGA/cB/55SGlwxyEElpXBo9ZjQnX7F1dqjjz56rTJo9GTxjIGjR4/OZWrWUe+fUiZCNfEsWLBgrbqoCt3J8O4VK1ZkWcPyNVx/MFGVGkDNV27q0jmgYzUYKAWsqSdMVTvdBFPlQVMKktR5rPNbvYb82nYdhtMferNifwk4IqU0HpgAHGNmk4D/BL6SUhoL/BE4vX3VDIIgCHqLrc9/GzMbBtwJfBT4KfC6lNIrZnYg8L9SSu9Y1/tHjhyZpkyZ0p/6BkEQvOqYNm3avJTSxN5e3ysbu5ltYGYPAKuA24BHgedSSq6vrABGVb0/CIIg6By9erCnlP6RUpoAjAb2B/YoXVZ6r5lNMbO5Zja3lEozCIIgaC3r5RWTUnoOuAOYBGxtZr75OhooxtynlGaklCamlCaWQsGDIAiC1tL0wW5m25nZ1nV5M+AoYDFwO/Ce+mWnAT9uVyWDIAiC3tObyNMdgZlmtgG1fwTXp5RuMrNFwHVm9lngfuA7baxnEARB0EvWyyum3zczWw38GXi6YzftLNsSbRuKRNuGJq+mtu2cUtqu6uI16eiDHcDM5q6P285QIto2NIm2DU2ibdVESoEgCIIuIx7sQRAEXcZAPNhnDMA9O0W0bWgSbRuaRNsq6LiNPQiCIGgvYYoJgiDoMuLBHgRB0GV09MFuZseY2RIzW2ZmF3Ty3q3GzMaY2e1mttjMHjKzj9fLh5vZbWa2tP57m4Gua1+oJ36738xuqv+9q5ndU2/X981s42afMRgxs63N7AYze7g+dgd20Zh9oj4XF5rZtWa26VAdNzO7wsxWmdlCKSuOk9X4Wv258qCZ7TtwNW9ORdu+WJ+TD5rZjzzav/7ahfW2LTGzdWbQdTr2YK9Hrl4GHAvsCZxiZnt26v5t4BXg3JTSHtRy55xZb88FwJx6nvo59b+HIh+nljrC6Zb8+18Fbkkp7Q6Mp9bGIT9mZjYK+BgwMaW0N7ABMJmhO27fBY5Zo6xqnI4FxtZ/pgCXd6iOfeW7rN2224C9U0pvBh4BLgSoP1MmA3vV3/ON+rN0nXRyxb4/sCyl9Fj9pKXrgBM6eP+WklJamVK6ry6/SO0BMYpam2bWL5sJnDgwNew7ZjYaOA747/rfBhwB3FC/ZKi2a0vgrdTTX
6SU/l5PbDfkx6zOhsBm9eR8w4CVDNFxSyn9Enh2jeKqcToBuCrVuJtagsIdO1PT9afUtpTS/5U06HdTS6wItbZdl1J6KaX0OLCM2rN0nXTywT4KeEL+7poc7ma2C7APcA+wQ0ppJdQe/sD2A1ezPvNfwHmAn0U2gu7Iv78bsBq4sm5m+m8z25wuGLOU0u+BLwHLqT3Qnwfm0R3j5lSNU7c9W/4D+Fld7lPbOvlgt0LZkPe1NLMtgBuBc1JKLwx0ffqLmR0PrEopzdPiwqVDcew2BPYFLk8p7UMtb9GQM7uUqNubTwB2BUYCm1MzUazJUBy3ZnTL/MTMPkXNzHu1FxUua9q2Tj7YVwBj5O/KHO5DBTPbiNpD/eqU0g/rxU+5Glj/varq/YOUg4F/NbP/oWYuO4LaCr5X+fcHOSuAFSmle+p/30DtQT/Uxwxq6bQfTymtTim9DPwQOIjuGDenapy64tliZqcBxwP/lnoCjPrUtk4+2O8FxtZ36TemtiEwu4P3byl1u/N3gMUppUvlpdnU8tPDEMxTn1K6MKU0OqW0C7Ux+n8ppX+jC/Lvp5T+ADxhZm+qFx0JLGKIj1md5cAkMxtWn5vetiE/bkLVOM0GTq17x0wCnneTzVDBzI4Bzgf+NaWkR83NBiab2SZmtiu1DeLfNv3AlFLHfoB/obbj+yjwqU7euw1tOYSaSvQg8ED951+o2aPnAEvrv4cPdF370cbDgZvq8m71CbUM+AGwyUDXr49tmgDMrY/bLGCbbhkzYBrwMLAQ+D/AJkN13IBrqe0VvExt1Xp61ThRM1dcVn+uLKDmGTTgbVjPti2jZkv3Z8k35fpP1du2BDi2N/eIlAJBEARdRkSeBkEQdBnxYA+CIOgy4sEeBEHQZcSDPQiCoMuIB3sQBEGXEQ/2IAiCLiMe7EEQBF3G/wehGW3vWPf3MwAAAABJRU5ErkJggg==\n", "text/plain": [ "