{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Fitting Linear Models with Custom Loss Functions and Regularization in Python\n",
"\n",
"### by [Alex P. Miller](https://alex.miller.im/) ([@alexpmil](https://twitter.com/alexpmil))\n",
"\n",
"As part of a predictive model competition I participated in earlier this month, I found myself trying to accomplish a peculiar task. The challenge organizers were going to use \"mean absolute percentage error\" (MAPE) as their criterion for model evaluation. Since this is not a standard loss function built into most software, I decided to write my own code to train a model that would use the MAPE in its objective function. \n",
"\n",
"I started by searching through the SciKit-Learn documentation on [linear models](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.linear_model) to see if the model I needed had already been developed somewhere. I thought that the `sklearn.linear_model.RidgeCV` class would accomplish what I wanted (MAPE minimization with L2 regularization), but I could not get the `scoring` argument (which supposedly lets you pass a custom loss function to the model class) to behave as I expected it to.\n",
"\n",
"While I highly recommend searching through existing packages to see if the model you want already exists, you should (in theory) be able to use this notebook as a template for building linear models with an arbitrary loss function and regularization scheme."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Python Code\n",
"\n",
"I'll be using a Jupyter Notebook (running Python 3) to build my model. If you're reading this on my website, you can find [the raw .ipynb file linked here](https://github.com/alexmill/website_notebooks/blob/master/custom-loss-function-regularization-python.ipynb); you can also run a fully-executable version of the notebook on the Binder platform by [clicking here](https://mybinder.org/v2/gh/alexmill/website_notebooks/master?filepath=custom-loss-function-regularization-python.ipynb).\n",
"\n",
"We'll start with some basic imports:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import numpy as np\n",
"import pandas as pd\n",
"from matplotlib import pyplot as plt\n",
"from sklearn.preprocessing import StandardScaler"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Your Data\n",
"\n",
"For the purposes of this walkthrough, I'll need to generate some raw data. Presumably, if you've found yourself here, you will want to substitute this step with one where you load your own data. \n",
"\n",
"I am simulating a scenario where I have 100 observations on 10 features (9 features and an intercept). The \"true\" function will simply be a linear function of these features: $y=X\\beta$. However, we want to simulate observing these data with noise. Because I'm mostly going to be focusing on the \"mean absolute percentage error\" loss function, I want my noise to be on an exponential scale, which is why I am taking exponents/logs below:\n",
"\n",
"$$y = e^{\\log(X\\beta) + \\varepsilon}, \\; \\varepsilon \\sim \\mathcal{N}(0, 0.2)$$"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# Generate predictors\n",
"X_raw = np.random.random(100*9)\n",
"X_raw = np.reshape(X_raw, (100, 9))\n",
"\n",
"# Standardize the predictors\n",
"scaler = StandardScaler().fit(X_raw)\n",
"X = scaler.transform(X_raw)\n",
"\n",
"# Add an intercept column and take absolute values so that\n",
"# X*beta stays positive (required for the exponential noise below)\n",
"X = np.abs(np.concatenate((np.ones((X.shape[0],1)), X), axis=1))\n",
"\n",
"# Define my \"true\" beta coefficients\n",
"beta = np.array([2,6,7,3,5,7,1,2,2,8])\n",
"\n",
"# Y = Xb\n",
"Y_true = np.matmul(X,beta)\n",
"\n",
"# Observed data with noise\n",
"Y = Y_true*np.exp(np.random.normal(loc=0.0, scale=0.2, size=100))"
]
},
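{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check on this noise model (a sketch with made-up values, separate from the simulated data above): multiplicative lognormal noise can never flip the sign of a positive signal, which is what keeps $y$ strictly positive and therefore safe for MAPE:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Multiplicative lognormal noise keeps a positive signal positive,\n",
"# since exp(eps) > 0 for any draw eps\n",
"rng = np.random.default_rng(42)\n",
"signal = np.full(1000, 10.0)\n",
"noisy = signal * np.exp(rng.normal(loc=0.0, scale=0.2, size=1000))\n",
"print(np.all(noisy > 0))  # True\n",
"```"
]
},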
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define your custom loss function\n",
"\n",
"I am mainly going to focus on the MAPE loss function in this notebook, but this is where you would substitute in your own loss function (if applicable). MAPE is defined as follows:\n",
"\n",
"### Mean Absolute Percentage Error (MAPE)\n",
"\n",
"$$ \\text{error}(\\beta) = \\frac{100}{n} \\sum_{i=1}^{n}\\left| \\frac{y_i - X_i\\beta}{y_i} \\right|$$\n",
"\n",
"While I won't go into too much detail here, I ended up using a *weighted* MAPE criterion to fit the model I used in the data science competition. Given a set of sample weights $w_i$, you can define the weighted MAPE loss function using the following formula:\n",
"\n",
"### Weighted MAPE\n",
"\n",
"$$\\text{error}(\\beta) = 100 \\left( \\sum_{i=1}^N w_i \\right)^{-1} \\sum_{i=1}^N w_i \\left| \\frac{y_i - X_i\\beta}{y_i} \\right|$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In Python, the MAPE can be calculated with the function below:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"def mean_absolute_percentage_error(y_pred, y_true, sample_weights=None):\n",
"    y_true = np.array(y_true)\n",
"    y_pred = np.array(y_pred)\n",
"    assert len(y_true) == len(y_pred)\n",
"    \n",
"    if np.any(y_true == 0):\n",
"        print(\"Found zeroes in y_true. MAPE undefined. Removing from set...\")\n",
"        idx = np.where(y_true == 0)[0]\n",
"        y_true = np.delete(y_true, idx)\n",
"        y_pred = np.delete(y_pred, idx)\n",
"        if sample_weights is not None:\n",
"            sample_weights = np.delete(np.array(sample_weights), idx)\n",
"    \n",
"    if sample_weights is None:\n",
"        return np.mean(np.abs((y_true - y_pred) / y_true)) * 100\n",
"    else:\n",
"        sample_weights = np.array(sample_weights)\n",
"        assert len(sample_weights) == len(y_true)\n",
"        return 100 / np.sum(sample_weights) * np.dot(\n",
"            sample_weights, np.abs((y_true - y_pred) / y_true)\n",
"        )\n",
"\n",
"loss_function = mean_absolute_percentage_error"
]
},
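{
"cell_type": "markdown",
"metadata": {},
"source": [
"To double-check the implementation against the formulas above, here is a small sketch with made-up numbers. The relative errors are 0.1, 0.1, and 0.0, so the plain MAPE should be $20/3 \\approx 6.67\\%$, and with weights $(1, 1, 2)$ the weighted MAPE should be exactly $5\\%$:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Made-up values to verify the MAPE formulas by hand\n",
"y_true = np.array([100.0, 200.0, 300.0])\n",
"y_pred = np.array([110.0, 180.0, 300.0])\n",
"rel_err = np.abs((y_true - y_pred) / y_true)   # 0.1, 0.1, 0.0\n",
"\n",
"mape = np.mean(rel_err) * 100                  # plain MAPE\n",
"w = np.array([1.0, 1.0, 2.0])\n",
"wmape = 100 / w.sum() * np.dot(w, rel_err)     # weighted MAPE\n",
"\n",
"print(round(mape, 4), round(wmape, 4))  # 6.6667 5.0\n",
"```"
]
},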
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Fitting a simple linear model with custom loss function\n",
"\n",
"You may know that the traditional method for fitting linear models, ordinary least squares, has a nice analytic solution. This means that the \"optimal\" model parameters that minimize the squared error of the model can be calculated directly from the input data:\n",
"\n",
"$$ \\hat\\beta = \\arg\\min_\\beta \\frac{1}{n} \\sum_{i=1}^n (y_i - X_i\\beta)^2 = (X^\\mathrm{T}X)^{-1}X^\\mathrm{T}y $$\n",
"\n",
"However, with an arbitrary loss function, there is no guarantee that finding the optimal parameters can be done so easily. To keep this notebook as generalizable as possible, I'm going to be minimizing our custom loss functions using numerical optimization techniques (similar to the \"solver\" functionality in Excel). In general terms, the $\\beta$ we want to fit can be found as the solution to the following equation (where I've substituted in the MAPE for the error function in the last line):\n",
"\n",
"$$ \\hat\\beta = \\arg\\min_\\beta \\; \\text{error}(\\beta) = \\arg\\min_\\beta \\frac{100}{n} \\sum_{i=1}^{n}\\left| \\frac{y_i - X_i\\beta}{y_i} \\right| $$\n",
"\n",
"Essentially we want to search over the space of all $\\beta$ values and find the value that minimizes our chosen error function. To get a flavor for what this looks like in Python, I'll fit a simple MAPE model below, using the `minimize` function from SciPy. "
]
},
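{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a point of contrast, the OLS closed form above can be verified directly on toy data (a noise-free sketch with made-up dimensions; in practice `np.linalg.lstsq` is preferred over forming an explicit inverse):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Recover beta exactly from noise-free toy data via the normal equations\n",
"rng = np.random.default_rng(0)\n",
"X_demo = rng.normal(size=(50, 3))\n",
"beta_true = np.array([1.0, -2.0, 0.5])\n",
"y_demo = X_demo @ beta_true\n",
"beta_ols = np.linalg.inv(X_demo.T @ X_demo) @ X_demo.T @ y_demo\n",
"print(np.allclose(beta_ols, beta_true))  # True\n",
"```"
]
},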
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[ 9.08252394 5.54995839 8.75233095 1.1883712 3.29482497 5.03886496\n",
" -0.22556182 0.38830739 3.15524308 5.24599191]\n"
]
}
],
"source": [
"from scipy.optimize import minimize\n",
"\n",
"def objective_function(beta, X, Y):\n",
" error = loss_function(np.matmul(X,beta), Y)\n",
" return(error)\n",
"\n",
"# You must provide a starting point at which to initialize\n",
"# the parameter search space\n",
"beta_init = np.array([1]*X.shape[1])\n",
"result = minimize(objective_function, beta_init, args=(X,Y),\n",
" method='BFGS', options={'maxiter': 500})\n",
"\n",
"# The optimal values for the input parameters are stored\n",
"# in result.x\n",
"beta_hat = result.x\n",
"print(beta_hat)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can compare the estimated betas to the true model betas that we initialized at the beginning of this notebook:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" true_beta estimated_beta error\n",
"0 2 9.082524 -7.082524\n",
"1 6 5.549958 0.450042\n",
"2 7 8.752331 -1.752331\n",
"3 3 1.188371 1.811629\n",
"4 5 3.294825 1.705175\n",
"5 7 5.038865 1.961135\n",
"6 1 -0.225562 1.225562\n",
"7 2 0.388307 1.611693\n",
"8 2 3.155243 -1.155243\n",
"9 8 5.245992 2.754008"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pd.DataFrame({\n",
" \"true_beta\": beta, \n",
" \"estimated_beta\": beta_hat,\n",
" \"error\": beta-beta_hat\n",
"})[[\"true_beta\", \"estimated_beta\", \"error\"]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It's obviously not perfect, but we can see that our estimated values are at least in the ballpark of the true values. We can also calculate the final MAPE of our estimated model using our loss function:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"14.354033248368872"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loss_function(np.matmul(X,beta_hat), Y)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Incorporating Regularization into Model Fitting\n",
"\n",
"The process described above fits a simple linear model to the data provided by directly minimizing a custom loss function (MAPE, in this case). However, in many machine learning problems, you will want to [regularize](https://en.wikipedia.org/wiki/Regularization_(mathematics)) your model parameters to prevent overfitting. In this notebook, I'm going to walk through the process of incorporating L2 regularization, which amounts to penalizing your model's parameters by the square of their magnitude.\n",
"\n",
"In precise terms, rather than minimizing our loss function directly, we will augment our loss function by adding a squared penalty term on our model's coefficients. With L2 regularization, our new loss function becomes:\n",
"\n",
"$$ L(\\beta) = \\frac{100}{N} \\sum_{i=1}^N \\left| \\frac{y_i - X_i\\beta}{y_i} \\right| + \\lambda \\sum_{k=1}^K \\beta_k^2 $$\n",
"\n",
"Or, in the case that sample weights are provided:\n",
"\n",
"$$ L(\\beta) = 100 \\left( \\sum_{i=1}^N w_i \\right)^{-1} \\sum_{i=1}^N w_i \\left| \\frac{y_i - X_i\\beta}{y_i} \\right| + \\lambda \\sum_{k=1}^K \\beta_k^2 $$\n",
"\n",
"For now, we will assume that the $\\lambda$ coefficient (the regularization parameter) is already known. However, later we will use cross validation to find the optimal $\\lambda$ value for our data.\n"
]
},
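{
"cell_type": "markdown",
"metadata": {},
"source": [
"The penalty term itself is cheap to compute; for example, with hypothetical values $\\lambda = 0.01$ and $\\beta = (2, -3, 1)$, the penalty is $0.01 \\times (4 + 9 + 1) = 0.14$:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Hypothetical values, purely to illustrate lambda * sum(beta_k^2)\n",
"lam = 0.01\n",
"beta_demo = np.array([2.0, -3.0, 1.0])\n",
"l2_penalty = lam * np.sum(beta_demo**2)\n",
"print(round(l2_penalty, 2))  # 0.14\n",
"```"
]
},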
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since our model is getting a little more complicated, I'm going to define a Python class with a very similar attribute and method scheme as those found in SciKit-Learn (e.g., `sklearn.linear_model.Lasso` or `sklearn.ensemble.RandomForestRegressor`)."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"class CustomLinearModel:\n",
"    \"\"\"\n",
"    Linear model: Y = XB, fit by minimizing the provided loss_function\n",
"    with L2 regularization\n",
"    \"\"\"\n",
"    def __init__(self, loss_function=mean_absolute_percentage_error,\n",
"                 X=None, Y=None, sample_weights=None, beta_init=None,\n",
"                 regularization=0.00012):\n",
"        self.regularization = regularization\n",
"        self.beta = None\n",
"        self.loss_function = loss_function\n",
"        self.sample_weights = sample_weights\n",
"        self.beta_init = beta_init\n",
"        \n",
"        self.X = X\n",
"        self.Y = Y\n",
"    \n",
"    def predict(self, X):\n",
"        prediction = np.matmul(X, self.beta)\n",
"        return(prediction)\n",
"\n",
"    def model_error(self):\n",
"        error = self.loss_function(\n",
"            self.predict(self.X), self.Y, sample_weights=self.sample_weights\n",
"        )\n",
"        return(error)\n",
"    \n",
"    def l2_regularized_loss(self, beta):\n",
"        self.beta = beta\n",
"        return(self.model_error() + \\\n",
"               sum(self.regularization*np.array(self.beta)**2))\n",
"    \n",
"    def fit(self, maxiter=250):\n",
"        # Initialize beta estimates (you may need to normalize\n",
"        # your data and choose smarter initialization values\n",
"        # depending on the shape of your loss function)\n",
"        if self.beta_init is None:\n",
"            # Set beta_init = 1 for every feature\n",
"            self.beta_init = np.array([1]*self.X.shape[1])\n",
"        # Otherwise, use the provided initial values\n",
"        \n",
"        if self.beta is not None and np.all(self.beta_init == self.beta):\n",
"            print(\"Model already fit once; continuing fit with more iterations.\")\n",
"        \n",
"        res = minimize(self.l2_regularized_loss, self.beta_init,\n",
"                       method='BFGS', options={'maxiter': maxiter})\n",
"        self.beta = res.x\n",
"        self.beta_init = self.beta"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([ 8.70454035, 5.56955027, 8.82671937, 1.10660836, 3.36271348,\n",
" 5.16710648, -0.08675964, 0.4776243 , 3.12646051, 5.28643399])"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"l2_mape_model = CustomLinearModel(\n",
" loss_function=mean_absolute_percentage_error,\n",
" X=X, Y=Y, regularization=0.00012\n",
")\n",
"l2_mape_model.fit()\n",
"l2_mape_model.beta"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Just to confirm that our regularization did work, let's make sure that the estimated betas found with regularization are different from those found without regularization (which we calculated earlier):"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" true_beta estimated_beta regularized_beta\n",
"0 2 9.082524 8.704540\n",
"1 6 5.549958 5.569550\n",
"2 7 8.752331 8.826719\n",
"3 3 1.188371 1.106608\n",
"4 5 3.294825 3.362713\n",
"5 7 5.038865 5.167106\n",
"6 1 -0.225562 -0.086760\n",
"7 2 0.388307 0.477624\n",
"8 2 3.155243 3.126461\n",
"9 8 5.245992 5.286434"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pd.DataFrame({\n",
" \"true_beta\": beta, \n",
" \"estimated_beta\": beta_hat,\n",
" \"regularized_beta\": l2_mape_model.beta\n",
"})[[\"true_beta\", \"estimated_beta\", \"regularized_beta\"]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since our regularization parameter is so small, we can see that it didn't affect our coefficient estimates dramatically. But the fact that the betas are different between the two models indicates that our regularization does seem to be working."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Just to make sure things are in the realm of common sense, it's never a bad idea to plot your predicted Y against your observed Y."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
""
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXQAAAD8CAYAAABn919SAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAGy1JREFUeJzt3X+MXPdZ7/H348202aQ069w6lrNpcG8VuVyIsJtV6ZUlRBPalLZK9qZKf0hF5ipS+IN7VXIr0w1CkKKiLAQo/IUwBWTdBm5Mk2zSFhoiOwU1ormsuw65uY4VAWlg7WtvIQsN3tKN/dw/9owzO3vOzJmZ8+N7vvN5SdbuzM7uPHN2/cz3PN/n+z3m7oiISPNtqzsAEREphhK6iEgklNBFRCKhhC4iEgkldBGRSCihi4hEQgldRCQSSugiIpFQQhcRicRlVT7ZW97yFt+9e3eVTyki0njHjx//trvv6Pe4ShP67t27WVxcrPIpRUQaz8y+ledxKrmIiERCCV1EJBJK6CIikVBCFxGJhBK6iEgkKu1yEZHxsrC0zANPnOL06hrXTk1y8NY9zO6brjusaCmhi0gpFpaWufeR51hbvwDA8uoa9z7yHICSeklUchGRUjzwxKlLybxtbf0CDzxxqqaI4qeELiKlOL26NtD9MjoldBEpxbVTkwPdL6NTQheRUhy8dQ+TrYlN9022Jjh4656aIoqfJkVFpBTtiU91uVRHCV1ESjO7b1oJvEIquYiIREIJXUQkEkroIiKRUEIXEYmEErqISCSU0EVEIqGELiISCSV0EZFIKKGLiESib0I3sz1mdqLj37+a2c+a2dVm9qSZvZh83F5FwCIikq5vQnf3U+6+1933AjcB54FHgTngqLvfABxNbouISE0GLbncAvytu38LuB04nNx/GJgtMjARERnMoAn9Y8AfJ5/vdPczAMnHa4oMTEREBpM7oZvZG4DbgD8Z5AnM7G4zWzSzxZWVlUHjExGRnAYZof8E8E13P5vcPmtmuwCSj+fSvsndD7n7jLvP7NixY7RoRUQk0yAJ/eO8Xm4BeBw4kHx+AHisqKBERGRwuRK6mV0BvBd4pOPueeC9ZvZi8rX54sMTEZG8cl2xyN3PA/+h675/YqPrRUREAqCVoiIikVBCFxGJhBK6iEgklNBFRCKhhC4iEgkldBGRSCihi4hEQgldRCQSSugiIpFQQhcRiUSupf8iIk2ysLTMA0+c4vTqGtdOTXLw1j3M7puuO6zSKaGLSFQWlpa595HnWFu/AMDy6hr3PvIcQPRJXSUXEYnKA0+cupTM29bWL/DAE6dqiqg6SugiEpXTq2sD3R8TJXQRicq1U5MD3R8TJXSRCCwsLbN//hhvm/sK++ePsbC0XHdItTl46x4mWxOb7ptsTXDw1j01RVQdTYqKNNw4TwKmab9mdbmISOP0mgQchySWZnbf9Fi+dpVcRBpunCcBZTMldJGGG+dJQNlMCV2k4cZ5ElA2Uw1dpOHGeRJQNlNCF4nAuE4CymYquYiIRCJXQjezKTP7opm9YGYnzew/m9nVZvakmb2YfNxedrAiIpIt7wj9t4Gvuvs7gB8GTgJzwFF3vwE4mtwWEZGa9K2hm9mbgR8FfgrA3b8HfM/Mbgd+LHnYYeBrwKfLCFJEwjeue5CHJM8I/T8CK8AfmtmSmX3ezK4Edrr7GYDk4zUlxikiAWtvP7C8uobz+vYD47ynTB3yJPTLgHcCv+Pu+4B/Y4DyipndbWaLZra4srIyZJgiErJx3oM8JHkS+j8C/+juzyS3v8hGgj9rZrsAko/n0r7Z3Q+5+4y7z+zYsaOImEUkMNp+IAx9a+ju/v/M7B/MbI+7nwJuAf5v8u8AMJ98fKzUSEUCFXrtuIr4rp2aZDkleWv7gWrlXVj034EHzewNwN8B/5WN0f0RM7sLeBm4s5wQRcIV+ta1VcV38NY9m54HtP1AHX
IldHc/AcykfOmWYsMRaZbQt66tKj5tP5Cu6rM3Lf0XGUHoteMq49P2A5vVcfampf8iIwh969rQ44tZHZ0/SugiIwh969rQ44tZHWdvKrmIjCD02nHo8cWsjs4fc/fSfni3mZkZX1xcrOz5RETq0l1Dh42zo/vvuHHgN1QzO+7uaY0pm2iELiJSgjrOjpTQRaQUoS+4qkLVnT9K6CIlG8fEFvqCq1ipy0WkROO6C6E266qHRugiJQppJWmVZwqhL7iKlRK6SIlCSWxVl0CatllXLGUxlVxEShTKSs2qSyBNWtAUU1lMCV2kRKEktqrPFGb3TXP/HTcyPTWJAdNTk0P1X1chpnq/Si4iJQplpWaRJZC85YmmbNYVSlmsCEroIiULIbEVtV95jO2ITav396KSi8gYKKoEElN5oi2UslgRNEIXGRNFnCnEVJ5oC6UsVgQldBHJLabyRKcQymJFUMlFRHKLqTwRI43QRSS3mMoTMVJCFxlRLKsM84qlPBEjJXSJWtnJNsY2PmkuJXSJVhXJNqTNt6R+dZ+tKaFLtKpItjG28clwiTmEs7VcCd3MXgK+A1wAXnP3GTO7GngI2A28BHzE3V8pJ0yRwVWRbGNt40tT9+izKsMm5hDO1gZpW3yPu+/tuFDpHHDU3W8Ajia3RYJRxU6H49LGF9OOhP0Muxo2hLO1UfrQbwcOJ58fBmZHD0ekv4WlZfbPH+Ntc19h//yxzKRSRbKtalfBvK+5LDEu+c8ybGIOYavkvDV0B/7czBz4XXc/BOx09zMA7n7GzK5J+0Yzuxu4G+D6668vIGQZZ4OcDlfVM112G18ItdkQRp9VGbaMVtQGaKPIm9D3u/vpJGk/aWYv5H2CJPkfApiZmfEhYhS5ZNA6ZQw90yHUZkOcKyirpj9sYg5h0VWuhO7up5OP58zsUeBdwFkz25WMzncB50qMUwQYr5FiWwivOYTRZ6eiz1q63xw+fNM0T72wMnBirnsA0Tehm9mVwDZ3/07y+fuAXwYeBw4A88nHx8oMVATCHCmWLYTXHMLos1ORZy1pbw4PH18O9gpLveQZoe8EHjWz9uP/yN2/amZ/DRwxs7uAl4E7ywtTZENoI8UqhPKa6x59diryrCWEklZR+iZ0d/874IdT7v8n4JYyghLJEtpIsQrj+Jr7KfKsJYSSVlG0UlQaJ6SRYlXG8TX3UuRZSwglraJoP3QRaZwi+/9jWhymEbpIQ4zL0vu8ijpriamkpYQu0gAhLC6KWSwlLZVcRBpgnJbey/CU0EUaIKZODCmPSi4iDRB6J4bq+2HQCF2kAULuxGjS1rp171pZNiV0kQaoapveYTSlvt+kN55hqeQi0hChdmI0pb4f0xL/LBqhi8hIQriwQx5NeeMZhRK6iIwk5Pp+p6a88YxCCV1ERhJyfb9TU954RqEausiI1LI3eH2/jmMW0xL/LEroIiPQkvzB1XnMQp1YLooSusgIQuic6DfaDe0MIoRjFisldJER1N050W+0G+IZRN3HLGaaFJXSxbw6r+7OiX6LekJc9FP3MYuZErqUqkmr84Z54ymrcyJvLP1GuyGOhseh26QuKrmMgTprqEXVS8t+DcOWJorqnOh8fVNXtHj1u6+xftH7xtJv064QN/Uah26TuiihR67uGmoRI8QqXsMobzyjdk50v75Xzq9vecza+gU+86XntzxPv2trFnntzSLF3m1SF5VcIld3DbWIemkVr6HO0kTa60vzyvl1dneVYPot6pndN82Hb5pmwgyACTM+fJOSaaw0Qo9c3TXUIkaIVbyGOksTg76O7jOUXqPdhaVlHj6+zAXfKN9ccOfh48vMfP/VQZW8pBi5R+hmNmFmS2b25eT21Wb2pJm9mHzcXl6YMqy6OwqKWBZe1GvoNdFY50TdML+LvGcoRZzdNGlie9wNUnL5JHCy4/YccNTdbwCOJrclMCF0FMzum+bpuZv5+/kP8vTczQOP7Ip4Df2SUvcbz/YrWrzxsm
3c89CJ0lst015fHnlG9kWc3dRdtpP8ciV0M7sO+CDw+Y67bwcOJ58fBmaLDU1G0R6N3vPQCS5vbWNqshX0xkm9FDHKz5OU2m88n/voXr67fpHVtfVKRqTt1zc12Rro+/KM7Is4u6m7bCf55a2h/xbwc8D3ddy3093PALj7GTO7pujgZDhpXROTrQk+99G9jUrknUbtihgkKdWxNH123zQPPHGK1bWtHS4ABnjH7bxnKEXMYVw12UqNSwuBwtN3hG5mHwLOufvxYZ7AzO42s0UzW1xZWRnmR8iAdIq81SAj1bpGpFk/34DPfXTvUGcoaWc377z+Kj515Fl2z32Ft9/7p/zCwnOZ37+wtMy/fe+1Lfe3tlntrY+yVZ4R+n7gNjP7AHA58GYz+wJw1sx2JaPzXcC5tG9290PAIYCZmRlPe4wUS6fIWw0yUq2r46XX845yhtL5vb+w8Bxf+MbLl752wf3S7c/O3rjlex944hTrF7b+t33T5Zc19mwvZn1H6O5+r7tf5+67gY8Bx9z9E8DjwIHkYQeAx0qLUgZSd2dLEYre/2WQOnxdE8lVPO8fP/MPA92fNQh45fx6dPvyxGCUPvR54IiZ3QW8DNxZTEgyqlBXB+Y1yMrQPP3R3Y/pN5dQ19L0Kp633Y+e9/6sswYIY+dG2cw84xdZhpmZGV9cXKzs+cZZkxeC7J8/lppEpqcmeXru5ku3uxM/bLxxdY688zxmUE0+tm+/909Tk/eEGX97/we23J92/Lp1/17yaPIxrIOZHXf3mX6P00rRSDV5r4y8cwBZk7+fOvIs8HrnSJEdK3XvjTOqj//IWzfV0DvvT9N51pA1Uh90bqbpxzBk2stFgpN3DiArkVxwv9Q3XvQEcdM7iD47eyOfePf1m/Z2+cS7r0+dEG1r9+dPFzQ30/RjGDKN0CU4eecAetV32wlimI6VXuWAGDqIPjt7Y88EnqWouZkYjmGoNEKX4OTtSOm3ZP706trAnSP9tgiIoYNoWEWs2IXxPoZl0whdgpRnDqD99U8deTZ1oq/dvw3pnSNpI/F+NfemdhAtLC3zmS89f2mv9anJFvfd9oMDJ+Mi5maaegybQAldGq2dXHoliLQklDUxl9XN0S4HNPFqOwtLyxz84rObFgitrq1z8E9enzyuUhOPYVMooUvjdXdiTJhtmmRLSxRZI/EJs8zRfufzNSn5ZK32XL/ope5P00vTjmFTqIYuUWiXQyZbE5cS8vLqGvc8dCJ1r5JeHTJ1bzdctF6TjZqIjIsSukQjbdTtwIPfeHnLEvWsCbj2RN+oE38h6TXZqInIuKjkIj01aUVf1mjTYUtpodfEXNXlgLKP8cFb92ypoYN2TIyRErpkatqKvl596d3JPpSJuSqOcfvnFNHlImHTXi6SKe+eKqFYWFrmnodOkPYXHWrMTTvGUg/t5SIjC2FF3yDliNl90yx+65958BsvD3V1nzqEcIwlHpoUlUx1r+gb5mrzn529ceir+9Sh7mMscdEIXTLVvaJv2J0Sm9TjXPcxlrhohC6Zitq7Y1jjUI7oPsZTky0ub23jnodO6IpAMjCN0BuoylbCOke7dV3bM48ifwftY9y0riIJjxJ6w4T4n75746crWtt4w2UT/Mvaeu7LwqU9pqpyxKDJudfvAIZvhSz6YhwyfpTQGya0//RpGz+dX7/I+fWLQPobTt43pSp6xQd9g1xYWk7d3XFt/QL3Pf48//7axaHfbMehxCTlUkKvSPco8D3v2MFTL6wMnKhC+0+ftfFTp1EuC1d2yWeQWNrJP+uCyqtr61vuG+TNNuQSkzSDEnoF0kaBndd1zBrJpZUCQvtPn7Uys1v7snAQ1pvSILGkJf9RnqObOl5kVOpyqUCeRNB9TcWsHuz3vGNHMLsBLiwtYwM8vj1SzxrP1/GmNEgfeK/EPNmaYPsVrYGeo1vdXUXSfBqhVyDvCK3zcVmlgKdeWOH+O27MfQWeopJB1tV9Bt04IqtcUdeb0i
Cj4qyzowkz7r9j4xqdo46wm9RDL+FRQq9Ar02juh/X1qsUMMgVeNoGTfSdCXzqihavfvc11i++vs94r6v7ABjkTvbTFW+M1f3m9OGbpnPNZ2Ql/+5RdN0bfsn46pvQzexy4C+BNyaP/6K7/5KZXQ08BOwGXgI+4u6vlBdqc6Ulgm7dI7mrJlupk2xXTaaf1meN6D/zpef57vpgnRfdbw7tdsTun511dZ92gu73mmEj8Ve5CVXaG9/Dx5dzlTbydN1ohC11yjNC/3fgZnd/1cxawNfN7M+AO4Cj7j5vZnPAHPDpEmOtXFEljLRE0K/LxTKK01n3Z43os5Jxr86LvJN/7av7ZO0p3v5Zp1fX2Jbj0m5VGLXtszNht/8+7nnohEbjEoS+Cd039td9NbnZSv45cDvwY8n9h4GvEVFCL3oBz6Ajt9WURNzr/rxlnbbl1TX2zx9LfUPJW/Of7qilp/2c7uQXQgdHUR02IS7wEsnV5WJmE2Z2AjgHPOnuzwA73f0MQPLxmvLCrF6vkVwVBt2Fr309zU6TrQmmMko0Bpm7GOYZNXeOxJ+eu5m/n/8gT8/d3HNr2xA6OIra3bDuvw+RNLkSurtfcPe9wHXAu8zsh/I+gZndbWaLZra4srIybJyVq7tXOitBZ41osxLmfbf94JafA1snLDuTUdpztyaMqcnWSMk4b/Iv06DHNUvdfx8iaQbqcnH3VTP7GvB+4KyZ7XL3M2a2i43Re9r3HAIOwcYVi0aMtzJ1L+AZZtl7Vlkn7aIPadrJKJTLs5WhqNdW99+HSJo8XS47gPUkmU8CPw78KvA4cACYTz4+VmagVQth1V5RHRNPvbCSq4WwMxnF3K1RxGsL4e9DpFueEfou4LCZTbBRojni7l82s78CjpjZXcDLwJ0lxlm5mEapecoAMSajMhdaxfT3IfHQRaLHQNaFiCfMuOgeZTLK6qrRUnppIl0kWi7Ju8IxJqFtMyxSBSX0MTCO5QF1ocg4UkIfEzFPcqZRF4qMIyV0iUbnJOhVky1aE7bp4hsxTvyKdIoioVd50WQZXRm/r+5J0NW1dVrbjO1XtFg9n31tU5GYND6ha0+NZinr95U2Cbp+0bniDZex9IvvGz5gkQZp/BWLtKfGVgtLy+yfP8bb5r7C/vljl/ZoCUFZvy9NgopEkND1H3mzrEvXhZLUy/p9FbXplkiTNT6h6z/yZqGfsZTx+1pYWub8917bcr8mQWXcND6hF7V7XizqOGMZpMRT9O+rfUbSfSGPqclW1AunRNI0flJ0HBfN9FJ1//Wgk5xF/76yrq505RsvG9u/ARlfjU/oMH6LZnqpehfAYZbYF/n70hyKyOsaX3KRzaq+MlDdCVVzKCKvi2KELptVecZS9xJ77Usu8jqN0GUkdU9Kh3KtUpEQaIQeqaq2QwhhUlpzKCIblNAjVPV2CEqoImFQySVCoS8uEpFyKKFHqO7OExGphxJ6hNTKJzKelNAjVHfniYjUQ5OiEQqh86RqusiJiBJ6tMap80QXORHZoIQeGI00BzfMfjIiMepbQzezt5rZU2Z20syeN7NPJvdfbWZPmtmLycft5Ycbt9AvThEqdfWIbMgzKfoa8Cl3/wHg3cDPmNl/AuaAo+5+A3A0uS0jUP/4cNTVI7Khb0J39zPu/s3k8+8AJ4Fp4HbgcPKww8BsWUGOC400h6OuHpENA7UtmtluYB/wDLDT3c/ARtIHrsn4nrvNbNHMFldWVkaLNnIaaQ5HG3SJbDB3z/dAszcBfwH8irs/Ymar7j7V8fVX3L1nHX1mZsYXFxdHCjhm3d0asDHSVHISGW9mdtzdZ/o9LleXi5m1gIeBB939keTus2a2y93PmNku4Nzw4Q4nto6QcewfF5Hi9E3oZmbA7wMn3f03O770OHAAmE8+PlZKhBli7T0ep/5xESlWnhr6fuAngZvN7ETy7wNsJPL3mtmLwHuT25VRR4iIyGZ9R+ju/nXAMr58S7Hh5BdLR0hsZS
MRqU9jN+eKoSNEC4lEpEiNTegx9B6rbCQiRWrsXi4xdITEUjYSkTA0NqFD8ztCrp2aZDkleTepbCQi4WhsySUGMZSNRCQcjR6hN10MZSMRCYcSes2aXjYSkXCo5CIiEongR+haeCMikk/QCT3W/VpERMoQdMlFC29ERPILOqFr4Y2ISH5BJ/QY9msREalK0AldC29ERPILelJUC29ERPILOqGDFt6IiOQVdMlFRETyU0IXEYmEErqISCSU0EVEIqGELiISCXP36p7MbAX4Vp+HvQX4dgXhlEXx10vx16vJ8Ycc+/e7+45+D6o0oedhZovuPlN3HMNS/PVS/PVqcvxNjr1NJRcRkUgooYuIRCLEhH6o7gBGpPjrpfjr1eT4mxw7EGANXUREhhPiCF1ERIZQa0I3s7ea2VNmdtLMnjezTyb3X21mT5rZi8nH7XXGmaZH7PeZ2bKZnUj+faDuWNOY2eVm9r/N7Nkk/s8k9wd/7KFn/I04/m1mNmFmS2b25eR2I45/W0r8jTn+ZvaSmT2XxLmY3Neo49+t1pKLme0Cdrn7N83s+4DjwCzwU8A/u/u8mc0B293907UFmqJH7B8BXnX3X681wD7MzIAr3f1VM2sBXwc+CdxB4Mceesb/fhpw/NvM7H8AM8Cb3f1DZvZrNOD4t6XEfx8NOf5m9hIw4+7f7rivUce/W60jdHc/4+7fTD7/DnASmAZuBw4nDzvMRqIMSo/YG8E3vJrcbCX/nAYce+gZf2OY2XXAB4HPd9zdiOMPmfE3XWOOf5pgauhmthvYBzwD7HT3M7CROIFr6ousv67YAf6bmf2Nmf1ByKdsyenyCeAc8KS7N+rYZ8QPDTn+wG8BPwdc7LivMcef9PihOcffgT83s+NmdndyX5OO/xZBJHQzexPwMPCz7v6vdccziJTYfwd4O7AXOAP8Ro3h9eTuF9x9L3Ad8C4z+6G6YxpERvyNOP5m9iHgnLsfrzuWYfSIvxHHP7Hf3d8J/ATwM2b2o3UHNKraE3pS/3wYeNDdH0nuPpvUqNu16nN1xddLWuzufjZJNBeB3wPeVWeMebj7KvA1NurPjTj2nTrjb9Dx3w/cltRx/xdws5l9geYc/9T4G3T8cffTycdzwKNsxNqU45+q7i4XA34fOOnuv9nxpceBA8nnB4DHqo6tn6zY238Mif8C/J+qY8vDzHaY2VTy+STw48ALNODYQ3b8TTn+7n6vu1/n7ruBjwHH3P0TNOT4Z8XflONvZlcmzQyY2ZXA+9iItRHHP0vd1xTdD/wk8FxSCwX4eWAeOGJmdwEvA3fWFF8vWbF/3Mz2slGfewn46XrC62sXcNjMJth4Yz/i7l82s78i/GMP2fH/z4Yc/yxN+Nvv5dcacvx3Ao9ujMu4DPgjd/+qmf01DT7+WikqIhKJ2mvoIiJSDCV0EZFIKKGLiERCCV1EJBJK6CIikVBCFxGJhBK6iEgklNBFRCLx/wH++cyybkIW8AAAAABJRU5ErkJggg==\n",
"text/plain": [
"