{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 4 Pre-Processing and Training Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4.1 Contents\n",
"* [4 Pre-Processing and Training Data](#4_Pre-Processing_and_Training_Data)\n",
" * [4.1 Contents](#4.1_Contents)\n",
" * [4.2 Introduction](#4.2_Introduction)\n",
" * [4.3 Imports](#4.3_Imports)\n",
" * [4.4 Load Data](#4.4_Load_Data)\n",
" * [4.5 Extract Big Mountain Data](#4.5_Extract_Big_Mountain_Data)\n",
" * [4.6 Train/Test Split](#4.6_Train/Test_Split)\n",
" * [4.7 Initial Not-Even-A-Model](#4.7_Initial_Not-Even-A-Model)\n",
" * [4.7.1 Metrics](#4.7.1_Metrics)\n",
" * [4.7.1.1 R-squared, or coefficient of determination](#4.7.1.1_R-squared,_or_coefficient_of_determination)\n",
" * [4.7.1.2 Mean Absolute Error](#4.7.1.2_Mean_Absolute_Error)\n",
" * [4.7.1.3 Mean Squared Error](#4.7.1.3_Mean_Squared_Error)\n",
" * [4.7.2 sklearn metrics](#4.7.2_sklearn_metrics)\n",
" * [4.7.2.0.1 R-squared](#4.7.2.0.1_R-squared)\n",
" * [4.7.2.0.2 Mean absolute error](#4.7.2.0.2_Mean_absolute_error)\n",
" * [4.7.2.0.3 Mean squared error](#4.7.2.0.3_Mean_squared_error)\n",
" * [4.7.3 Note On Calculating Metrics](#4.7.3_Note_On_Calculating_Metrics)\n",
" * [4.8 Initial Models](#4.8_Initial_Models)\n",
" * [4.8.1 Imputing missing feature (predictor) values](#4.8.1_Imputing_missing_feature_(predictor)_values)\n",
" * [4.8.1.1 Impute missing values with median](#4.8.1.1_Impute_missing_values_with_median)\n",
" * [4.8.1.1.1 Learn the values to impute from the train set](#4.8.1.1.1_Learn_the_values_to_impute_from_the_train_set)\n",
" * [4.8.1.1.2 Apply the imputation to both train and test splits](#4.8.1.1.2_Apply_the_imputation_to_both_train_and_test_splits)\n",
" * [4.8.1.1.3 Scale the data](#4.8.1.1.3_Scale_the_data)\n",
" * [4.8.1.1.4 Train the model on the train split](#4.8.1.1.4_Train_the_model_on_the_train_split)\n",
" * [4.8.1.1.5 Make predictions using the model on both train and test splits](#4.8.1.1.5_Make_predictions_using_the_model_on_both_train_and_test_splits)\n",
" * [4.8.1.1.6 Assess model performance](#4.8.1.1.6_Assess_model_performance)\n",
" * [4.8.1.2 Impute missing values with the mean](#4.8.1.2_Impute_missing_values_with_the_mean)\n",
" * [4.8.1.2.1 Learn the values to impute from the train set](#4.8.1.2.1_Learn_the_values_to_impute_from_the_train_set)\n",
" * [4.8.1.2.2 Apply the imputation to both train and test splits](#4.8.1.2.2_Apply_the_imputation_to_both_train_and_test_splits)\n",
" * [4.8.1.2.3 Scale the data](#4.8.1.2.3_Scale_the_data)\n",
" * [4.8.1.2.4 Train the model on the train split](#4.8.1.2.4_Train_the_model_on_the_train_split)\n",
" * [4.8.1.2.5 Make predictions using the model on both train and test splits](#4.8.1.2.5_Make_predictions_using_the_model_on_both_train_and_test_splits)\n",
" * [4.8.1.2.6 Assess model performance](#4.8.1.2.6_Assess_model_performance)\n",
" * [4.8.2 Pipelines](#4.8.2_Pipelines)\n",
" * [4.8.2.1 Define the pipeline](#4.8.2.1_Define_the_pipeline)\n",
" * [4.8.2.2 Fit the pipeline](#4.8.2.2_Fit_the_pipeline)\n",
" * [4.8.2.3 Make predictions on the train and test sets](#4.8.2.3_Make_predictions_on_the_train_and_test_sets)\n",
" * [4.8.2.4 Assess performance](#4.8.2.4_Assess_performance)\n",
" * [4.9 Refining The Linear Model](#4.9_Refining_The_Linear_Model)\n",
" * [4.9.1 Define the pipeline](#4.9.1_Define_the_pipeline)\n",
" * [4.9.2 Fit the pipeline](#4.9.2_Fit_the_pipeline)\n",
" * [4.9.3 Assess performance on the train and test set](#4.9.3_Assess_performance_on_the_train_and_test_set)\n",
" * [4.9.4 Define a new pipeline to select a different number of features](#4.9.4_Define_a_new_pipeline_to_select_a_different_number_of_features)\n",
" * [4.9.5 Fit the pipeline](#4.9.5_Fit_the_pipeline)\n",
" * [4.9.6 Assess performance on train and test data](#4.9.6_Assess_performance_on_train_and_test_data)\n",
" * [4.9.7 Assessing performance using cross-validation](#4.9.7_Assessing_performance_using_cross-validation)\n",
" * [4.9.8 Hyperparameter search using GridSearchCV](#4.9.8_Hyperparameter_search_using_GridSearchCV)\n",
" * [4.10 Random Forest Model](#4.10_Random_Forest_Model)\n",
" * [4.10.1 Define the pipeline](#4.10.1_Define_the_pipeline)\n",
" * [4.10.2 Fit and assess performance using cross-validation](#4.10.2_Fit_and_assess_performance_using_cross-validation)\n",
" * [4.10.3 Hyperparameter search using GridSearchCV](#4.10.3_Hyperparameter_search_using_GridSearchCV)\n",
" * [4.11 Final Model Selection](#4.11_Final_Model_Selection)\n",
" * [4.11.1 Linear regression model performance](#4.11.1_Linear_regression_model_performance)\n",
" * [4.11.2 Random forest regression model performance](#4.11.2_Random_forest_regression_model_performance)\n",
" * [4.11.3 Conclusion](#4.11.3_Conclusion)\n",
" * [4.12 Data quantity assessment](#4.12_Data_quantity_assessment)\n",
" * [4.13 Save best model object from pipeline](#4.13_Save_best_model_object_from_pipeline)\n",
" * [4.14 Summary](#4.14_Summary)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4.2 Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In preceding notebooks, we performed preliminary assessments of data quality and refined the question to be answered. We found a small number of observations that clearly indicated whether to replace values or drop a whole row. We determined that predicting the adult weekend ticket price was our primary aim. We threw away records with missing price data, but not before making the most of the other available data to look for any patterns among the states. We didn't see any and decided to treat all states equally; the state label didn't seem to be particularly useful.\n",
"\n",
"In this notebook, we'll start to build machine learning models. Before diving into a machine learning model, however, we'll start by considering how useful the mean value is as a predictor. We never want to go to stakeholders with a machine learning model only to have the CEO point out that it performs worse than just guessing the average! Our first model is always a baseline performance comparitor for any subsequent model. Next, we'll build up the process of efficiently creating robust models to compare to our baseline forecast. We can validate steps with our own functions for checking expected equivalences between, say, pandas and sklearn implementations."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4.3 Imports"
]
},
{
"cell_type": "code",
"execution_count": 88,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"import os\n",
"import pickle\n",
"import matplotlib.pyplot as plt\n",
"import seaborn as sns\n",
"from sklearn import __version__ as sklearn_version\n",
"from sklearn.decomposition import PCA\n",
"from sklearn.preprocessing import scale\n",
"from sklearn.model_selection import train_test_split, cross_validate, GridSearchCV, learning_curve\n",
"from sklearn.preprocessing import StandardScaler, MinMaxScaler\n",
"from sklearn.dummy import DummyRegressor\n",
"from sklearn.linear_model import LinearRegression\n",
"from sklearn.ensemble import RandomForestRegressor\n",
"from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error\n",
"from sklearn.pipeline import make_pipeline\n",
"from sklearn.impute import SimpleImputer\n",
"from sklearn.feature_selection import SelectKBest, f_regression\n",
"import datetime\n",
"\n",
"from library.sb_utils import save_file"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4.4 Load Data"
]
},
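{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4.5 Extract Big Mountain Data"
]
},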
{
"cell_type": "code",
"execution_count": 89,
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"text/html": [
"
"
],
"text/plain": [
" 124\n",
"Name Big Mountain Resort\n",
"Region Montana\n",
"state Montana\n",
"summit_elev 6817\n",
"vertical_drop 2353\n",
"base_elev 4464\n",
"trams 0\n",
"fastSixes 0\n",
"fastQuads 3\n",
"quad 2\n",
"triple 6\n",
"double 0\n",
"surface 3\n",
"total_chairs 14\n",
"Runs 105.0\n",
"TerrainParks 4.0\n",
"LongestRun_mi 3.3\n",
"SkiableTerrain_ac 3000.0\n",
"Snow Making_ac 600.0\n",
"daysOpenLastYear 123.0\n",
"yearsOpen 72.0\n",
"averageSnowfall 333.0\n",
"AdultWeekend 81.0\n",
"projectedDaysOpen 123.0\n",
"NightSkiing_ac 600.0\n",
"resorts_per_state 12\n",
"resorts_per_100kcapita 1.122778\n",
"resorts_per_100ksq_mile 8.161045\n",
"resort_skiable_area_ac_state_ratio 0.140121\n",
"resort_days_open_state_ratio 0.129338\n",
"resort_terrain_park_state_ratio 0.148148\n",
"resort_night_skiing_state_ratio 0.84507\n",
"total_chairs_runs_ratio 0.133333\n",
"total_chairs_skiable_ratio 0.004667\n",
"fastQuads_runs_ratio 0.028571\n",
"fastQuads_skiable_ratio 0.001"
]
},
"execution_count": 91,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"big_mountain.T"
]
},
{
"cell_type": "code",
"execution_count": 92,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(278, 36)"
]
},
"execution_count": 92,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ski_data.shape"
]
},
{
"cell_type": "code",
"execution_count": 93,
"metadata": {},
"outputs": [],
"source": [
"ski_data = ski_data[ski_data.Name != 'Big Mountain Resort']"
]
},
{
"cell_type": "code",
"execution_count": 94,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(277, 36)"
]
},
"execution_count": 94,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ski_data.shape"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4.6 Train/Test Split"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So far, we've treated ski resort data as a single entity. In machine learning, when we train our model on all of our data, we end up with no data set aside to evaluate model performance. We could keep making more and more complex models that fit the data better and better and not realise we are overfitting the model. By partitioning the data into training and testing splits, without letting a model (or missing-value imputation) learn anything about the test split, we have a somewhat independent assessment of how our model might perform in the future. An often overlooked subtlety here is that people all too frequently use the test set to assess model performance _and then compare multiple models to pick the best_. This means their overall model selection process is flawed: The engineer picks the model sans help from the test set. Instead we use held-out data and/or k-fold cross-validation to simulate additional test sets and assess model performance. The formal test set is very useful as a final check on expected future performance."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"What partition sizes would we have with a 70/30 train/test split?"
]
},
{
"cell_type": "code",
"execution_count": 95,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(193.89999999999998, 83.1)"
]
},
"execution_count": 95,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(ski_data) * .7, len(ski_data) * .3"
]
},
{
"cell_type": "code",
"execution_count": 96,
"metadata": {},
"outputs": [],
"source": [
"# Generate test and train sets for X and Y variables; set random state to get reproducable results\n",
"X_train, X_test, y_train, y_test = train_test_split(ski_data.drop(columns='AdultWeekend'), \n",
" ski_data.AdultWeekend, test_size=0.3, \n",
" random_state=47)"
]
},
{
"cell_type": "code",
"execution_count": 97,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"((193, 35), (84, 35))"
]
},
"execution_count": 97,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Check the shapes of X train and test sets\n",
"X_train.shape, X_test.shape"
]
},
{
"cell_type": "code",
"execution_count": 98,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"((193,), (84,))"
]
},
"execution_count": 98,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Check the shapes of train Y train and test sets\n",
"y_train.shape, y_test.shape"
]
},
{
"cell_type": "code",
"execution_count": 99,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"((193, 32), (84, 32))"
]
},
"execution_count": 99,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"names_list = ['Name', 'state', 'Region']\n",
"names_train = X_train[names_list]\n",
"names_test = X_test[names_list]\n",
"X_train.drop(columns=names_list, inplace=True)\n",
"X_test.drop(columns=names_list, inplace=True)\n",
"X_train.shape, X_test.shape"
]
},
{
"cell_type": "code",
"execution_count": 100,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"summit_elev int64\n",
"vertical_drop int64\n",
"base_elev int64\n",
"trams int64\n",
"fastSixes int64\n",
"fastQuads int64\n",
"quad int64\n",
"triple int64\n",
"double int64\n",
"surface int64\n",
"total_chairs int64\n",
"Runs float64\n",
"TerrainParks float64\n",
"LongestRun_mi float64\n",
"SkiableTerrain_ac float64\n",
"Snow Making_ac float64\n",
"daysOpenLastYear float64\n",
"yearsOpen float64\n",
"averageSnowfall float64\n",
"projectedDaysOpen float64\n",
"NightSkiing_ac float64\n",
"resorts_per_state int64\n",
"resorts_per_100kcapita float64\n",
"resorts_per_100ksq_mile float64\n",
"resort_skiable_area_ac_state_ratio float64\n",
"resort_days_open_state_ratio float64\n",
"resort_terrain_park_state_ratio float64\n",
"resort_night_skiing_state_ratio float64\n",
"total_chairs_runs_ratio float64\n",
"total_chairs_skiable_ratio float64\n",
"fastQuads_runs_ratio float64\n",
"fastQuads_skiable_ratio float64\n",
"dtype: object"
]
},
"execution_count": 100,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Check the `dtypes` attribute of `X_train` to verify all features are numeric\n",
"X_train.dtypes"
]
},
{
"cell_type": "code",
"execution_count": 101,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"summit_elev int64\n",
"vertical_drop int64\n",
"base_elev int64\n",
"trams int64\n",
"fastSixes int64\n",
"fastQuads int64\n",
"quad int64\n",
"triple int64\n",
"double int64\n",
"surface int64\n",
"total_chairs int64\n",
"Runs float64\n",
"TerrainParks float64\n",
"LongestRun_mi float64\n",
"SkiableTerrain_ac float64\n",
"Snow Making_ac float64\n",
"daysOpenLastYear float64\n",
"yearsOpen float64\n",
"averageSnowfall float64\n",
"projectedDaysOpen float64\n",
"NightSkiing_ac float64\n",
"resorts_per_state int64\n",
"resorts_per_100kcapita float64\n",
"resorts_per_100ksq_mile float64\n",
"resort_skiable_area_ac_state_ratio float64\n",
"resort_days_open_state_ratio float64\n",
"resort_terrain_park_state_ratio float64\n",
"resort_night_skiing_state_ratio float64\n",
"total_chairs_runs_ratio float64\n",
"total_chairs_skiable_ratio float64\n",
"fastQuads_runs_ratio float64\n",
"fastQuads_skiable_ratio float64\n",
"dtype: object"
]
},
"execution_count": 101,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Repeat this check for the test split in `X_test`\n",
"X_test.dtypes"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We have only numeric features in X now!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4.7 Initial Not-Even-A-Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll begin by determining how good the mean is as a predictor. In other words, what if we simply say our best guess is the average price?"
]
},
{
"cell_type": "code",
"execution_count": 102,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"63.84730569948186"
]
},
"execution_count": 102,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Calculate the mean of `y_train`\n",
"train_mean = y_train.mean()\n",
"train_mean"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`sklearn`'s `DummyRegressor` easily does this:"
]
},
{
"cell_type": "code",
"execution_count": 103,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[63.8473057]])"
]
},
"execution_count": 103,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Sanity Check\n",
"dumb_reg = DummyRegressor(strategy='mean')\n",
"dumb_reg.fit(X_train, y_train)\n",
"dumb_reg.constant_"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Having established the grand mean, we need to determine how closely it matches, or explains, the actual values. There are many ways of assessing how good one set of values agrees with another, which brings us to the subject of metrics."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.7.1 Metrics"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.7.1.1 R-squared, or coefficient of determination"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One measure is $R^2$, the [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination). This is a measure of the proportion of variance in the dependent variable (our ticket price) that is predicted by our \"model\". The linked Wikipedia articles gives a nice explanation of how negative values can arise. This is frequently a cause of confusion for newcomers who, reasonably, ask how can a squared value be negative?\n",
"\n",
"Recall the mean can be denoted by $\\bar{y}$, where\n",
"\n",
"$$\\bar{y} = \\frac{1}{n}\\sum_{i=1}^ny_i$$\n",
"\n",
"and where $y_i$ are the individual values of the dependent variable.\n",
"\n",
"The total sum of squares (error), can be expressed as\n",
"\n",
"$$SS_{tot} = \\sum_i(y_i-\\bar{y})^2$$\n",
"\n",
"The above formula should be familiar as it's simply the variance without the denominator to scale (divide) by the sample size.\n",
"\n",
"The residual sum of squares is similarly defined to be\n",
"\n",
"$$SS_{res} = \\sum_i(y_i-\\hat{y})^2$$\n",
"\n",
"where $\\hat{y}$ are our predicted values for the depended variable.\n",
"\n",
"The coefficient of determination, $R^2$, here is given by\n",
"\n",
"$$R^2 = 1 - \\frac{SS_{res}}{SS_{tot}}$$\n",
"\n",
"Putting it into words, it's one minus the ratio of the residual variance to the original variance. Thus, the baseline model here, which always predicts $\\bar{y}$, should give $R^2=0$. A model that perfectly predicts the observed values would have no residual error and so give $R^2=1$. Models that do worse than predicting the mean will have increased the sum of squares of residuals and so produce a negative $R^2$."
]
},
{
"cell_type": "code",
"execution_count": 104,
"metadata": {},
"outputs": [],
"source": [
"#Calculate the R^2 as defined above\n",
"def r_squared(y, ypred):\n",
" \"\"\"R-squared score.\n",
" \n",
" Calculate the R-squared, or coefficient of determination, of the input.\n",
" \n",
" Arguments:\n",
" y -- the observed values\n",
" ypred -- the predicted values\n",
" \"\"\"\n",
" ybar = np.mean(y)\n",
" sum_sq_tot = np.sum((y - ybar)**2)\n",
" sum_sq_res = np.sum((y - ypred)**2)\n",
" R2 = 1.0 - sum_sq_res / sum_sq_tot\n",
" return R2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We make our predictions by creating an array of length the size of the training set with the single value of the mean."
]
},
{
"cell_type": "code",
"execution_count": 105,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([63.8473057, 63.8473057, 63.8473057, 63.8473057, 63.8473057])"
]
},
"execution_count": 105,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"y_tr_pred_ = train_mean * np.ones(len(y_train))\n",
"y_tr_pred_[:5]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Remember the `sklearn` dummy regressor? "
]
},
{
"cell_type": "code",
"execution_count": 106,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([63.8473057, 63.8473057, 63.8473057, 63.8473057, 63.8473057])"
]
},
"execution_count": 106,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"y_tr_pred = dumb_reg.predict(X_train)\n",
"y_tr_pred[:5]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that `DummyRegressor` produces exactly the same results and saves us from having to broadcast the mean (or whichever other statistic we used - check out the [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.dummy.DummyRegressor.html) to see what's available) to an array of the appropriate length. It also gives us an object with `fit()` and `predict()` methods as well, so we can use them as conveniently as any other `sklearn` estimator."
]
},
{
"cell_type": "code",
"execution_count": 107,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.0"
]
},
"execution_count": 107,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"r_squared(y_train, y_tr_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Exactly as expected, if we use the average value as our prediction, we get an $R^2$ of zero _on our training set_. What if we use this \"model\" to predict unseen values from the test set? Remember, of course, that our \"model\" is trained on the training set; we still use the training set mean as our prediction."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Make predictions by creating an array of length the size of the test set with the single value of the (training) mean."
]
},
{
"cell_type": "code",
"execution_count": 108,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"-0.0015364646867073173"
]
},
"execution_count": 108,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"y_te_pred = train_mean * np.ones(len(y_test))\n",
"r_squared(y_test, y_te_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Generally, you can expect performance on a test set to be slightly worse than on the training set. As you are getting an $R^2$ of zero on the training set, there's nowhere to go but negative!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$R^2$ is a common metric, and interpretable in terms of the amount of variance explained, it's less appealing if we want an idea of how \"close\" our predictions are to the true values. Metrics that summarise the difference between predicted and actual values are _mean absolute error_ and _mean squared error_."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.7.1.2 Mean Absolute Error"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is very simply the average of the absolute errors:\n",
"\n",
"$$MAE = \\frac{1}{n}\\sum_i^n|y_i - \\hat{y}|$$"
]
},
{
"cell_type": "code",
"execution_count": 109,
"metadata": {},
"outputs": [],
"source": [
"def mae(y, ypred):\n",
" \"\"\"Mean absolute error.\n",
" \n",
" Calculate the mean absolute error of the arguments\n",
"\n",
" Arguments:\n",
" y -- the observed values\n",
" ypred -- the predicted values\n",
" \"\"\"\n",
" abs_error = np.abs(y - ypred)\n",
" mae = np.mean(abs_error)\n",
" return mae"
]
},
{
"cell_type": "code",
"execution_count": 110,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"18.149503610835193"
]
},
"execution_count": 110,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mae(y_train, y_tr_pred)"
]
},
{
"cell_type": "code",
"execution_count": 111,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"18.672179249938317"
]
},
"execution_count": 111,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mae(y_test, y_te_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Mean absolute error is arguably the most intuitive of all the metrics, this essentially tells you that, on average, you might expect to be off by around \\\\$19 if you guessed ticket price based on an average of known values."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.7.1.3 Mean Squared Error"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another common metric (and an important one internally for optimizing machine learning models) is the mean squared error. This is simply the average of the square of the errors:\n",
"\n",
"$$MSE = \\frac{1}{n}\\sum_i^n(y_i - \\hat{y})^2$$"
]
},
{
"cell_type": "code",
"execution_count": 112,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Calculate the MSE as defined above\n",
"def mse(y, ypred):\n",
" \"\"\"Mean square error.\n",
" \n",
" Calculate the mean square error of the arguments\n",
"\n",
" Arguments:\n",
" y -- the observed values\n",
" ypred -- the predicted values\n",
" \"\"\"\n",
" sq_error = (y - ypred)**2\n",
" mse = np.mean(sq_error)\n",
" return mse"
]
},
{
"cell_type": "code",
"execution_count": 113,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"616.9493046578431"
]
},
"execution_count": 113,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mse(y_train, y_tr_pred)"
]
},
{
"cell_type": "code",
"execution_count": 114,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"574.1671108060107"
]
},
"execution_count": 114,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mse(y_test, y_te_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So here, we get a slightly better MSE on the test set than we did on the train set. And what does a squared error mean anyway? To convert this back to our measurement space, we often take the square root, to form the _root mean square error_ thus:"
]
},
{
"cell_type": "code",
"execution_count": 115,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([24.83846422, 23.96178438])"
]
},
"execution_count": 115,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np.sqrt([mse(y_train, y_tr_pred), mse(y_test, y_te_pred)])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.7.2 sklearn metrics"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Functions are good, but you don't want to have to define functions every time we want to assess performance. `sklearn.metrics` provides many commonly used metrics, included the ones above."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.7.2.0.1 R-squared"
]
},
{
"cell_type": "code",
"execution_count": 116,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(0.0, -0.0015364646867073173)"
]
},
"execution_count": 116,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"r2_score(y_train, y_tr_pred), r2_score(y_test, y_te_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.7.2.0.2 Mean absolute error"
]
},
{
"cell_type": "code",
"execution_count": 117,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(18.149503610835193, 18.672179249938317)"
]
},
"execution_count": 117,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mean_absolute_error(y_train, y_tr_pred), mean_absolute_error(y_test, y_te_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.7.2.0.3 Mean squared error"
]
},
{
"cell_type": "code",
"execution_count": 118,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(616.9493046578431, 574.1671108060107)"
]
},
"execution_count": 118,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mean_squared_error(y_train, y_tr_pred), mean_squared_error(y_test, y_te_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.7.3 Note On Calculating Metrics"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In a Jupyter code cell, running `r2_score?` will bring up the docstring for the function, and `r2_score??` will bring up the actual code of the function! Here we try it and compare the source for `sklearn`'s function with ours."
]
},
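{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the introspection itself looks like this in a live cell (the trailing `?` and `??` are IPython conveniences, so this only works interactively, not in a plain Python script):\n",
"\n",
"```python\n",
"# Run in a Jupyter/IPython cell\n",
"r2_score?    # shows the docstring\n",
"r2_score??   # shows the full source code\n",
"```"
]
},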
{
"cell_type": "code",
"execution_count": 119,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(0.0, -3.054984985780873e+30)"
]
},
"execution_count": 119,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# train set - sklearn\n",
"# correct order, incorrect order\n",
"r2_score(y_train, y_tr_pred), r2_score(y_tr_pred, y_train)"
]
},
{
"cell_type": "code",
"execution_count": 120,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(-0.0015364646867073173, -2.8431378228302645e+30)"
]
},
"execution_count": 120,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# test set - sklearn\n",
"# correct order, incorrect order\n",
"r2_score(y_test, y_te_pred), r2_score(y_te_pred, y_test)"
]
},
{
"cell_type": "code",
"execution_count": 121,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(0.0, -3.054984985780873e+30)"
]
},
"execution_count": 121,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# train set - using our homebrew function\n",
"# correct order, incorrect order\n",
"r_squared(y_train, y_tr_pred), r_squared(y_tr_pred, y_train)"
]
},
{
"cell_type": "code",
"execution_count": 122,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(-0.0015364646867073173, -2.8431378228302645e+30)"
]
},
"execution_count": 122,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# test set - using our homebrew function\n",
"# correct order, incorrect order\n",
"r_squared(y_test, y_te_pred), r_squared(y_te_pred, y_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can get very different results swapping the argument order. It's worth highlighting this because data scientists do this too much in the real world! Frequently the argument order doesn't matter, but it will bite when we do it with a function that does care. It's sloppy, bad practice and if we don't make a habit of putting arguments in the right order, we stand to forget!\n",
"\n",
"Remember:\n",
"* argument order matters,\n",
"* check function syntax with `func?` in a code cell"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4.8 Initial Models"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.8.1 Imputing missing feature (predictor) values"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Recall when performing EDA, we imputed (filled in) some missing values in Pandas. We can impute missing values using scikit-learn, but we will prioritize imputation from a train split and apply that to the test split to then assess how well our imputation worked."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.8.1.1 Impute missing values with median"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We have missing values. Recall from our data exploration that many distributions were skewed. Our first thought might be to impute missing values using the median."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.8.1.1.1 Learn the values to impute from the train set"
]
},
{
"cell_type": "code",
"execution_count": 123,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"summit_elev 2175.000000\n",
"vertical_drop 750.000000\n",
"base_elev 1280.000000\n",
"trams 0.000000\n",
"fastSixes 0.000000\n",
"fastQuads 0.000000\n",
"quad 0.000000\n",
"triple 1.000000\n",
"double 1.000000\n",
"surface 2.000000\n",
"total_chairs 6.000000\n",
"Runs 29.000000\n",
"TerrainParks 2.000000\n",
"LongestRun_mi 1.000000\n",
"SkiableTerrain_ac 170.000000\n",
"Snow Making_ac 96.500000\n",
"daysOpenLastYear 107.000000\n",
"yearsOpen 57.000000\n",
"averageSnowfall 120.000000\n",
"projectedDaysOpen 112.000000\n",
"NightSkiing_ac 70.000000\n",
"resorts_per_state 15.000000\n",
"resorts_per_100kcapita 0.248243\n",
"resorts_per_100ksq_mile 24.428973\n",
"resort_skiable_area_ac_state_ratio 0.050000\n",
"resort_days_open_state_ratio 0.070595\n",
"resort_terrain_park_state_ratio 0.069444\n",
"resort_night_skiing_state_ratio 0.066804\n",
"total_chairs_runs_ratio 0.200000\n",
"total_chairs_skiable_ratio 0.040323\n",
"fastQuads_runs_ratio 0.000000\n",
"fastQuads_skiable_ratio 0.000000\n",
"dtype: float64"
]
},
"execution_count": 123,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# These are the values we'll use to fill in any missing values\n",
"X_defaults_median = X_train.median()\n",
"X_defaults_median"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.8.1.1.2 Apply the imputation to both train and test splits"
]
},
{
"cell_type": "code",
"execution_count": 124,
"metadata": {},
"outputs": [],
"source": [
"X_tr = X_train.fillna(X_defaults_median)\n",
"X_te = X_test.fillna(X_defaults_median)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.8.1.1.3 Scale the data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we have features measured in many different units, with numbers that vary by orders of magnitude, start off by scaling them to put them all on a consistent scale. The [StandardScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) scales each feature to zero mean and unit variance."
]
},
{
"cell_type": "code",
"execution_count": 125,
"metadata": {},
"outputs": [],
"source": [
"#Call the StandardScaler`s fit method on `X_tr` to fit the scaler\n",
"#then use it's `transform()` method to apply the scaling to both the train and test split\n",
"#data (`X_tr` and `X_te`), naming the results `X_tr_scaled` and `X_te_scaled`, respectively\n",
"scaler = StandardScaler()\n",
"scaler.fit(X_tr)\n",
"X_tr_scaled = scaler.transform(X_tr)\n",
"X_te_scaled = scaler.transform(X_te)"
]
},
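{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (an extra step, not part of the original analysis), we could confirm that each scaled training feature now has roughly zero mean and unit variance:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Each column of the scaled train split should have mean ~0 and standard deviation ~1\n",
"np.round(X_tr_scaled.mean(axis=0), 2), np.round(X_tr_scaled.std(axis=0), 2)\n",
"```"
]
},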
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.8.1.1.4 Train the model on the train split"
]
},
{
"cell_type": "code",
"execution_count": 126,
"metadata": {},
"outputs": [],
"source": [
"lm = LinearRegression().fit(X_tr_scaled, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.8.1.1.5 Make predictions using the model on both train and test splits"
]
},
{
"cell_type": "code",
"execution_count": 127,
"metadata": {},
"outputs": [],
"source": [
"#Call the `predict()` method of the model (`lm`) on both the (scaled) train and test data\n",
"#Assign the predictions to `y_tr_pred` and `y_te_pred`, respectively\n",
"y_tr_pred = lm.predict(X_tr_scaled)\n",
"y_te_pred = lm.predict(X_te_scaled)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.8.1.1.6 Assess model performance"
]
},
{
"cell_type": "code",
"execution_count": 128,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(0.8237204449411376, 0.7251410286259974)"
]
},
"execution_count": 128,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# r^2 - train, test\n",
"median_r2 = r2_score(y_train, y_tr_pred), r2_score(y_test, y_te_pred)\n",
"median_r2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Recall that we estimated ticket prices by simply using a known average. As expected, this produced an $R^2$ of zero for both the training and test set, because $R^2$ tells us how much of the variance we've explaining beyond that of using just the mean. Here, we see that our simple linear regression model explains over 80% of the variance on the train set and over 70% on the test set. Clearly, we are onto something, although the much lower value for the test set is indicative of overfitting. This isn't a surprise as we've made no effort to select a parsimonious set of features or deal with multicollinearity in our data."
]
},
{
"cell_type": "code",
"execution_count": 129,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(8.495768235382354, 9.696652536263656)"
]
},
"execution_count": 129,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Now calculate the mean absolute error scores using `sklearn`'s `mean_absolute_error` function as we did above for R^2\n",
"# MAE - train, test\n",
"median_mae = mean_absolute_error(y_train, y_tr_pred), mean_absolute_error(y_test, y_te_pred)\n",
"median_mae"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using this model, then, on average we'd expect to estimate a ticket price within \\\\$9 or so of the real price. This is much, much better than the \\\\$19 from just guessing using the average. There may be something to this machine learning lark after all!"
]
},
{
"cell_type": "code",
"execution_count": 130,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(108.75554891895914, 157.57287631288543)"
]
},
"execution_count": 130,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# And also do the same using `sklearn`'s `mean_squared_error`\n",
"# MSE - train, test\n",
"median_mse = mean_squared_error(y_train, y_tr_pred), mean_squared_error(y_test, y_te_pred)\n",
"median_mse"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.8.1.2 Impute missing values with the mean"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We chose to use the median for filling missing values because of the skew of many of our predictor feature distributions, let's try the mean."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.8.1.2.1 Learn the values to impute from the train set"
]
},
{
"cell_type": "code",
"execution_count": 131,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"summit_elev 4042.036269\n",
"vertical_drop 1057.264249\n",
"base_elev 2975.487047\n",
"trams 0.103627\n",
"fastSixes 0.093264\n",
"fastQuads 0.673575\n",
"quad 0.948187\n",
"triple 1.414508\n",
"double 1.746114\n",
"surface 2.476684\n",
"total_chairs 7.455959\n",
"Runs 41.387435\n",
"TerrainParks 2.447205\n",
"LongestRun_mi 1.301579\n",
"SkiableTerrain_ac 458.691099\n",
"Snow Making_ac 128.935294\n",
"daysOpenLastYear 109.761290\n",
"yearsOpen 56.895833\n",
"averageSnowfall 160.112903\n",
"projectedDaysOpen 114.900621\n",
"NightSkiing_ac 84.843478\n",
"resorts_per_state 16.523316\n",
"resorts_per_100kcapita 0.442984\n",
"resorts_per_100ksq_mile 42.862331\n",
"resort_skiable_area_ac_state_ratio 0.096680\n",
"resort_days_open_state_ratio 0.121639\n",
"resort_terrain_park_state_ratio 0.113116\n",
"resort_night_skiing_state_ratio 0.150272\n",
"total_chairs_runs_ratio 0.266321\n",
"total_chairs_skiable_ratio 0.070053\n",
"fastQuads_runs_ratio 0.010619\n",
"fastQuads_skiable_ratio 0.001700\n",
"dtype: float64"
]
},
"execution_count": 131,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# As we did for the median above, calculate mean values for imputing missing values\n",
"# These are the values we'll use to fill in any missing values\n",
"X_defaults_mean = X_train.mean()\n",
"X_defaults_mean"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By eye, we can immediately tell that our replacement values are much higher than those from using the median."
]
},
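{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make that comparison concrete (an extra illustration, not part of the original analysis), we could line the two sets of fill values up side by side:\n",
"\n",
"```python\n",
"import pandas as pd\n",
"\n",
"# Side-by-side view of the fill values learned from the train split\n",
"pd.concat([X_defaults_median, X_defaults_mean], axis=1, keys=['median', 'mean'])\n",
"```"
]
},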
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.8.1.2.2 Apply the imputation to both train and test splits"
]
},
{
"cell_type": "code",
"execution_count": 194,
"metadata": {},
"outputs": [],
"source": [
"X_tr = X_train.fillna(X_defaults_mean)\n",
"X_te = X_test.fillna(X_defaults_mean)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.8.1.2.3 Scale the data"
]
},
{
"cell_type": "code",
"execution_count": 195,
"metadata": {},
"outputs": [],
"source": [
"scaler = StandardScaler()\n",
"scaler.fit(X_tr)\n",
"X_tr_scaled = scaler.transform(X_tr)\n",
"X_te_scaled = scaler.transform(X_te)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.8.1.2.4 Train the model on the train split"
]
},
{
"cell_type": "code",
"execution_count": 196,
"metadata": {},
"outputs": [],
"source": [
"lm = LinearRegression().fit(X_tr_scaled, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.8.1.2.5 Make predictions using the model on both train and test splits"
]
},
{
"cell_type": "code",
"execution_count": 197,
"metadata": {},
"outputs": [],
"source": [
"y_tr_pred = lm.predict(X_tr_scaled)\n",
"y_te_pred = lm.predict(X_te_scaled)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### 4.8.1.2.6 Assess model performance"
]
},
{
"cell_type": "code",
"execution_count": 198,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(0.8221207605475709, 0.7290195691422242)"
]
},
"execution_count": 198,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"r2_score(y_train, y_tr_pred), r2_score(y_test, y_te_pred)"
]
},
{
"cell_type": "code",
"execution_count": 137,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(8.510780313012354, 9.565093916371973)"
]
},
"execution_count": 137,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mean_absolute_error(y_train, y_tr_pred), mean_absolute_error(y_test, y_te_pred)"
]
},
{
"cell_type": "code",
"execution_count": 138,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(109.74247309324214, 155.34936226136)"
]
},
"execution_count": 138,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mean_squared_error(y_train, y_tr_pred), mean_squared_error(y_test, y_te_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"These results don't seem very different to when the one's we used with the median for imputing missing values. Perhaps it doesn't make much difference here. Maybe our overtraining is worse than we thought. Maybe other feature transformations, such as taking the log, would help. We could try with just a subset of features rather than using all of them as inputs.\n",
"\n",
"To perform the median/mean comparison, we copied and pasted a lot of code just to change the function for imputing missing values. It would make more sense to write a function that performed the sequence of steps:\n",
"1. impute missing values\n",
"2. scale the features\n",
"3. train a model\n",
"4. calculate model performance\n",
"\n",
"These are common steps, and `sklearn` provides something much better than writing custom functions."
]
},
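{
"cell_type": "markdown",
"metadata": {},
"source": [
"The helper might look something like this (the function name and signature are our own invention, purely for illustration):\n",
"\n",
"```python\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.linear_model import LinearRegression\n",
"from sklearn.metrics import r2_score, mean_absolute_error\n",
"\n",
"def fit_and_score(imputer_func, X_train, y_train, X_test, y_test):\n",
"    \"\"\"Hypothetical helper: impute, scale, train, and score a linear model.\n",
"\n",
"    imputer_func maps a DataFrame of training features to a Series of fill\n",
"    values, e.g. `pd.DataFrame.median` or `pd.DataFrame.mean`.\n",
"    \"\"\"\n",
"    # 1. learn fill values from the train split only, apply them to both splits\n",
"    fill_values = imputer_func(X_train)\n",
"    X_tr, X_te = X_train.fillna(fill_values), X_test.fillna(fill_values)\n",
"\n",
"    # 2. scale the features (fit the scaler on the train split only)\n",
"    scaler = StandardScaler().fit(X_tr)\n",
"    X_tr_scaled, X_te_scaled = scaler.transform(X_tr), scaler.transform(X_te)\n",
"\n",
"    # 3. train the model on the train split\n",
"    lm = LinearRegression().fit(X_tr_scaled, y_train)\n",
"\n",
"    # 4. calculate performance on both splits\n",
"    y_tr_pred, y_te_pred = lm.predict(X_tr_scaled), lm.predict(X_te_scaled)\n",
"    return {\n",
"        'r2': (r2_score(y_train, y_tr_pred), r2_score(y_test, y_te_pred)),\n",
"        'mae': (mean_absolute_error(y_train, y_tr_pred), mean_absolute_error(y_test, y_te_pred))\n",
"    }\n",
"```\n",
"\n",
"We could then call, say, `fit_and_score(pd.DataFrame.median, X_train, y_train, X_test, y_test)` and swap in `pd.DataFrame.mean` for the comparison. The next section shows `sklearn`'s much cleaner alternative."
]
},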
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.8.2 Pipelines"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One of the most important and useful components of `sklearn` is the [pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html). In place of Pandas's `fillna` DataFrame method, there is `sklearn`'s `SimpleImputer`. Remember the first linear model above performed the steps:\n",
"\n",
"1. replace missing values with the median for each feature\n",
"2. scale the data to zero mean and unit variance\n",
"3. train a linear regression model\n",
"\n",
"and all these steps were trained on the `train split` and then applied to the `test split` for assessment.\n",
"\n",
"The pipeline below defines exactly those same steps. Crucially, the resultant `Pipeline` object has a `fit()` method and a `predict()` method, just like the `LinearRegression()` object itself. Just as we might create a linear regression model and train it with `.fit()` and predict with `.predict()`, we can wrap the entire process of imputing and feature scaling and regression in a single object you can train with `.fit()` and predict with `.predict()`. And that's basically a pipeline: a model on steroids."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.8.2.1 Define the pipeline"
]
},
{
"cell_type": "code",
"execution_count": 139,
"metadata": {},
"outputs": [],
"source": [
"pipe = make_pipeline(\n",
" SimpleImputer(strategy='median'), \n",
" StandardScaler(), \n",
" LinearRegression()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 140,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"sklearn.pipeline.Pipeline"
]
},
"execution_count": 140,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"type(pipe)"
]
},
{
"cell_type": "code",
"execution_count": 141,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(True, True)"
]
},
"execution_count": 141,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"hasattr(pipe, 'fit'), hasattr(pipe, 'predict')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.8.2.2 Fit the pipeline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, a single call to the pipeline's `fit()` method combines the steps of learning the imputation (determining what values to use to fill the missing ones), the scaling (determining the mean to subtract and the variance to divide by), and then training the model. It does this all in the one call with the training data as arguments."
]
},
{
"cell_type": "code",
"execution_count": 142,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Pipeline(steps=[('simpleimputer', SimpleImputer(strategy='median')),\n",
" ('standardscaler', StandardScaler()),\n",
" ('linearregression', LinearRegression())])"
]
},
"execution_count": 142,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Call the pipe's `fit()` method with `X_train` and `y_train` as arguments\n",
"pipe.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.8.2.3 Make predictions on the train and test sets"
]
},
{
"cell_type": "code",
"execution_count": 143,
"metadata": {},
"outputs": [],
"source": [
"y_tr_pred = pipe.predict(X_train)\n",
"y_te_pred = pipe.predict(X_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 4.8.2.4 Assess performance"
]
},
{
"cell_type": "code",
"execution_count": 144,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(0.8237204449411376, 0.7251410286259974)"
]
},
"execution_count": 144,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"r2_score(y_train, y_tr_pred), r2_score(y_test, y_te_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And compare with our earlier (non-pipeline) result:"
]
},
{
"cell_type": "code",
"execution_count": 145,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(0.8237204449411376, 0.7251410286259974)"
]
},
"execution_count": 145,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"median_r2"
]
},
{
"cell_type": "code",
"execution_count": 146,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(8.495768235382354, 9.696652536263656)"
]
},
"execution_count": 146,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mean_absolute_error(y_train, y_tr_pred), mean_absolute_error(y_test, y_te_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Compare with our earlier result:"
]
},
{
"cell_type": "code",
"execution_count": 147,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(8.495768235382354, 9.696652536263656)"
]
},
"execution_count": 147,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"median_mae"
]
},
{
"cell_type": "code",
"execution_count": 148,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(108.75554891895914, 157.57287631288543)"
]
},
"execution_count": 148,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mean_squared_error(y_train, y_tr_pred), mean_squared_error(y_test, y_te_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Compare with our earlier result:"
]
},
{
"cell_type": "code",
"execution_count": 149,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(108.75554891895914, 157.57287631288543)"
]
},
"execution_count": 149,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"median_mse"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"These results confirm the pipeline is doing exactly what's expected, and results are identical to our earlier steps. This allows we to move faster but with confidence."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4.9 Refining The Linear Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We suspected the model was overfitting. This is no real surprise given the number of features we blindly used. It's likely a judicious subset of features would generalize better. `sklearn` has a number of feature selection functions available. The one we'll use here is `SelectKBest` which, as we might guess, selects the k best features. We can read about SelectKBest \n",
"[here](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html#sklearn.feature_selection.SelectKBest). `f_regression` is just the [score function](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_regression.html#sklearn.feature_selection.f_regression) We're using because we're performing regression. It's important to choose an appropriate one for our machine learning task."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.9.1 Define the pipeline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Redefine our pipeline to include this feature selection step:"
]
},
{
"cell_type": "code",
"execution_count": 150,
"metadata": {},
"outputs": [],
"source": [
"pipe = make_pipeline(\n",
" SimpleImputer(strategy='median'), \n",
" StandardScaler(),\n",
" SelectKBest(score_func=f_regression),\n",
" LinearRegression()\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.9.2 Fit the pipeline"
]
},
{
"cell_type": "code",
"execution_count": 151,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Pipeline(steps=[('simpleimputer', SimpleImputer(strategy='median')),\n",
" ('standardscaler', StandardScaler()),\n",
" ('selectkbest',\n",
" SelectKBest(score_func=)),\n",
" ('linearregression', LinearRegression())])"
]
},
"execution_count": 151,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pipe.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.9.3 Assess performance on the train and test set"
]
},
{
"cell_type": "code",
"execution_count": 152,
"metadata": {},
"outputs": [],
"source": [
"y_tr_pred = pipe.predict(X_train)\n",
"y_te_pred = pipe.predict(X_test)"
]
},
{
"cell_type": "code",
"execution_count": 153,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(0.760478965339582, 0.681569974499793)"
]
},
"execution_count": 153,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"r2_score(y_train, y_tr_pred), r2_score(y_test, y_te_pred)"
]
},
{
"cell_type": "code",
"execution_count": 154,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(9.757130441228263, 10.585905291034962)"
]
},
"execution_count": 154,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mean_absolute_error(y_train, y_tr_pred), mean_absolute_error(y_test, y_te_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This has made things worse! Clearly selecting a subset of features has an impact on performance. `SelectKBest` defaults to k=10. Let's create a new pipeline with a different value of k:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.9.4 Define a new pipeline to select a different number of features"
]
},
{
"cell_type": "code",
"execution_count": 155,
"metadata": {},
"outputs": [],
"source": [
"# Modify the `SelectKBest` step to use a value of 15 for k\n",
"pipe15 = make_pipeline(\n",
" SimpleImputer(strategy='median'), \n",
" StandardScaler(),\n",
" SelectKBest(score_func=f_regression, k=15),\n",
" LinearRegression()\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.9.5 Fit the pipeline"
]
},
{
"cell_type": "code",
"execution_count": 156,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Pipeline(steps=[('simpleimputer', SimpleImputer(strategy='median')),\n",
" ('standardscaler', StandardScaler()),\n",
" ('selectkbest',\n",
" SelectKBest(k=15,\n",
" score_func=)),\n",
" ('linearregression', LinearRegression())])"
]
},
"execution_count": 156,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pipe15.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.9.6 Assess performance on train and test data"
]
},
{
"cell_type": "code",
"execution_count": 157,
"metadata": {},
"outputs": [],
"source": [
"y_tr_pred = pipe15.predict(X_train)\n",
"y_te_pred = pipe15.predict(X_test)"
]
},
{
"cell_type": "code",
"execution_count": 158,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(0.7922946911681397, 0.66079117939879)"
]
},
"execution_count": 158,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"r2_score(y_train, y_tr_pred), r2_score(y_test, y_te_pred)"
]
},
{
"cell_type": "code",
"execution_count": 159,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(9.214834764542976, 10.496823817105572)"
]
},
"execution_count": 159,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mean_absolute_error(y_train, y_tr_pred), mean_absolute_error(y_test, y_te_pred)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We could keep going, trying different values of k, training a model, measuring performance on the test set, and then picking the model with the best test set performance. There's a fundamental problem with this approach: _we're tuning the model to the arbitrary test set_! If we continue this way we'll end up with a model works well on the particular quirks of our test set _but fails to generalize to new data_. The whole point of keeping a test set is for it to be a set of unseen data on which to test performance.\n",
"\n",
"The way around this is a technique called _cross-validation_. We partition the training set into k folds, train our model on k-1 of those folds, and calculate performance on the fold not used in training. This procedure then cycles through k times with a different fold held back each time. Thus we end up building k models on k sets of data with k estimates of how the model performs on unseen data but without having to touch the test set."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.9.7 Assessing performance using cross-validation"
]
},
{
"cell_type": "code",
"execution_count": 160,
"metadata": {},
"outputs": [],
"source": [
"# Run 5-Fold Cross validation\n",
"cv_results = cross_validate(pipe15, X_train, y_train, cv=5)"
]
},
{
"cell_type": "code",
"execution_count": 161,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.60510478, 0.67731713, 0.75047442, 0.58935004, 0.50041885])"
]
},
"execution_count": 161,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# get scores\n",
"cv_scores = cv_results['test_score']\n",
"cv_scores"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Without using the same random state for initializing the CV folds, our actual numbers will be different."
]
},
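{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we wanted these fold scores to be reproducible, one option (an extra step, not part of the original analysis) is to pass an explicit `KFold` splitter with a fixed random state instead of the bare `cv=5`; the seed value below is arbitrary:\n",
"\n",
"```python\n",
"from sklearn.model_selection import KFold, cross_validate\n",
"\n",
"# Fix the fold assignment so repeated runs give the same scores\n",
"kf = KFold(n_splits=5, shuffle=True, random_state=47)\n",
"cv_results_fixed = cross_validate(pipe15, X_train, y_train, cv=kf)\n",
"cv_results_fixed['test_score']\n",
"```"
]
},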
{
"cell_type": "code",
"execution_count": 162,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(0.6245330431201284, 0.08445948393083175)"
]
},
"execution_count": 162,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np.mean(cv_scores), np.std(cv_scores)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"These results highlight that assessing model performance in inherently open to variability. We'll get different results depending on the quirks of which points are in which fold. An advantage of this is that you can also obtain an estimate of the variability, or uncertainty, in our performance estimate."
]
},
{
"cell_type": "code",
"execution_count": 163,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0.46, 0.79])"
]
},
"execution_count": 163,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np.round((np.mean(cv_scores) - 2 * np.std(cv_scores), np.mean(cv_scores) + 2 * np.std(cv_scores)), 2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.9.8 Hyperparameter search using GridSearchCV"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Pulling the above together, we have:\n",
"* a pipeline that\n",
" * imputes missing values\n",
" * scales the data\n",
" * selects the k best features\n",
" * trains a linear regression model\n",
"* a technique (cross-validation) for estimating model performance\n",
"\n",
"Now we will use cross-validation for multiple values of k, and then use cross-validation to pick the value of k that gives the best performance. `make_pipeline` automatically names each step in lowercase. Parameters of each step are then accessed by appending a double underscore followed by the parameter name. We know the name of the step will be 'selectkbest', and we know the parameter is 'k'."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also list the names of all the parameters in a pipeline as follows:"
]
},
{
"cell_type": "code",
"execution_count": 164,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"dict_keys(['memory', 'steps', 'verbose', 'simpleimputer', 'standardscaler', 'selectkbest', 'linearregression', 'simpleimputer__add_indicator', 'simpleimputer__copy', 'simpleimputer__fill_value', 'simpleimputer__missing_values', 'simpleimputer__strategy', 'simpleimputer__verbose', 'standardscaler__copy', 'standardscaler__with_mean', 'standardscaler__with_std', 'selectkbest__k', 'selectkbest__score_func', 'linearregression__copy_X', 'linearregression__fit_intercept', 'linearregression__n_jobs', 'linearregression__normalize', 'linearregression__positive'])"
]
},
"execution_count": 164,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Call `pipe`'s `get_params()` method to get a dict of available parameters and print their names\n",
"# using dict's `keys()` method\n",
"pipe.get_params().keys()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above can be particularly useful as our pipelines becomes more complex (we can even nest pipelines within pipelines)."
]
},
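{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, here is a minimal, purely illustrative nested pipeline; parameter names for the inner steps simply chain the double underscores:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.pipeline import make_pipeline\n",
"from sklearn.impute import SimpleImputer\n",
"from sklearn.preprocessing import StandardScaler\n",
"from sklearn.linear_model import LinearRegression\n",
"\n",
"# Wrap the preprocessing steps in their own pipeline, then nest it in another\n",
"preprocessing = make_pipeline(SimpleImputer(strategy='median'), StandardScaler())\n",
"nested_pipe = make_pipeline(preprocessing, LinearRegression())\n",
"\n",
"# e.g. 'pipeline__simpleimputer__strategy' addresses the inner imputer's strategy\n",
"[p for p in nested_pipe.get_params().keys() if 'simpleimputer' in p]"
]
},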
{
"cell_type": "code",
"execution_count": 165,
"metadata": {},
"outputs": [],
"source": [
"k = [k+1 for k in range(len(X_train.columns))]\n",
"grid_params = {'selectkbest__k': k}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we have a range of `k` to investigate. Is 1 feature best? 2? 3? 4? All of them? We could write a for loop and iterate over each possible value, doing all the housekeeping ourselves to track the best value of k. But this is a common task, so there's a built in function in `sklearn`. This is [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html).\n",
"\n",
"This takes the pipeline object, in fact it takes anything with a `.fit()` and `.predict()` method. In simple cases with no feature selection or imputation or feature scaling etc. we may see the classifier or regressor object itself directly passed into `GridSearchCV`. The other key input is the set of parameters and values to search over. Optional parameters include the cross-validation strategy and number of CPUs to use."
]
},
{
"cell_type": "code",
"execution_count": 166,
"metadata": {},
"outputs": [],
"source": [
"lr_grid_cv = GridSearchCV(pipe, param_grid=grid_params, cv=5, n_jobs=-1)"
]
},
{
"cell_type": "code",
"execution_count": 167,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"GridSearchCV(cv=5,\n",
" estimator=Pipeline(steps=[('simpleimputer',\n",
" SimpleImputer(strategy='median')),\n",
" ('standardscaler', StandardScaler()),\n",
" ('selectkbest',\n",
" SelectKBest(score_func=)),\n",
" ('linearregression',\n",
" LinearRegression())]),\n",
" n_jobs=-1,\n",
" param_grid={'selectkbest__k': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,\n",
" 12, 13, 14, 15, 16, 17, 18, 19, 20,\n",
" 21, 22, 23, 24, 25, 26, 27, 28, 29,\n",
" 30, ...]})"
]
},
"execution_count": 167,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"lr_grid_cv.fit(X_train, y_train)"
]
},
{
"cell_type": "code",
"execution_count": 168,
"metadata": {},
"outputs": [],
"source": [
"score_mean = lr_grid_cv.cv_results_['mean_test_score']\n",
"score_std = lr_grid_cv.cv_results_['std_test_score']\n",
"cv_k = [k for k in lr_grid_cv.cv_results_['param_selectkbest__k']]"
]
},
{
"cell_type": "code",
"execution_count": 169,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'selectkbest__k': 6}"
]
},
"execution_count": 169,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Print the `best_params_` attribute of `lr_grid_cv`\n",
"lr_grid_cv.best_params_"
]
},
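{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also check the mean cross-validated score (R-squared by default for a regressor) achieved by this best setting:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Mean cross-validated score for the best parameter setting found by the grid search\n",
"lr_grid_cv.best_score_"
]
},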
{
"cell_type": "code",
"execution_count": 170,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAA1cAAAHUCAYAAADWedKvAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjYuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8o6BhiAAAACXBIWXMAAA9hAAAPYQGoP6dpAAB94klEQVR4nO3deVhUZfsH8O8wzAw7yA6KiIggoKi4AeEuirtlWvZammZmm9nyE81M7c2tFFuwLJcsU3ozd1Kx3KVSEzfcN1BZBGURhFk4vz8GRkdAGZxhYPh+rutcnnnmnDP3YTgy9zzPuR+RIAgCiIiIiIiI6ImYGTsAIiIiIiIiU8DkioiIiIiISA+YXBEREREREekBkysiIiIiIiI9YHJFRERERESkB0yuiIiIiIiI9IDJFRERERERkR4wuSIiIiIiItIDJldERERERER6wOSKiPRq1apVEIlEmsXc3BxNmjTB2LFjcePGDc12Y8aMQbNmzQwej0gkwscff6x5vGfPHohEIuzZs8fgr91Q5Ofn47///S86dOgAOzs7yGQyNGvWDC+//DL+/fdfAMCwYcNgaWmJ3NzcKo/zwgsvQCKRIDMzs5YiNy6FQoGAgADMmzfP2KE8sWbNmmHgwIHGDsPgnn76aQwZMqRG+8bGxuLpp5+Gj48PRCIRunfvrt/goH4fxowZo3n8xx9/wMbGRuv/XiIyLCZXRGQQK1euRFJSEhITE/HKK69g7dq1iIyMRGFhIQBgxowZ2LBhQ63H1b59eyQlJaF9+/a1/tqm6NKlS2jXrh3mzZuHHj16YO3atdi5cydmzZqFzMxMhIaGIi8vD+PGjUNxcTF+/vnnSo+Tl5eHDRs2YODAgXBzc6vlszCOuLg43LlzB2+++aaxQ6FqKCwsxPbt2/HMM8/UaP9vvvkG165dQ8+ePeHi4qLn6CrXq1cvdOrUCdOmTauV1yMiwNzYARCRaQoODkaHDh0AAD169IBKpcKcOXOwceNGvPDCC/D19TVKXHZ2dujSpYtRXtvUqFQqDBs2DNnZ2UhKSkJwcLDmuW7duuGll17C77//DolEgujoaHh6emLFihWYNGlShWOtXbsW9+7dw7hx42rzFGrk3r17sLS0fKJjKJVKLFy4EC+//DKsra31EpdCodD0Fj+sqKgIVlZWNT62IAgoLi5+4vN+EoaIYdWqVRg7diwEQXjstgkJCVAqlRg0aFCNXislJQVmZurvtB+8Vgzt9ddfx8iRI/HJJ5/Ay8ur1l6XqKFizxUR1YryhObatWsAKh8WKBKJ8MYbb+Dbb79Fy5YtIZPJEBgYiHXr1lU4XkZGBl599VU0adIEUqkUPj4+mDVrFpRK5SPjqGxY4JgxY2BjY4OLFy+if//+sLGxgZeXF959912UlJRo7S+Xy/HJJ58gICAAMpkMLi4uGDt2LG7duvXYn0H565w9exZ9+/aFtbU1PDw8NMPC/vrrLzz11FOwtrZGy5Yt8cMPP9T4vGfNmoXOnTvD0dERdnZ2aN++PZYvX17hQ2T5cK7t27ejffv2sLS0REBAAFasWPHY89m4cSNOnjyJmJiYKj8sRkdHw8rKCmKxGC+99BKOHj2KkydPVthu5cqV8PDwQHR09CNf888//0T37t3h5OQES0tLNG3aFM888wyKioo025SUlGD27Nlo1aoVLCws4OTkhB49euDQoUOabYqLixETEwMfHx9IpVI0btwYr7/+eoVhi+U/n99++w3t2rWDhYUFZs2aBaDmv4MAsHnzZty4cQOjR4+u8NyFCxcwatQouLq6QiaToVWrVvj666+1tin/Pf7xxx/x7rvvonHjxpDJZLh48aLm9+zkyZOIioqCra0tevXqBQC4ffs2Jk2ahMaNG0MqlaJ58+aYPn16hd/z8mvxm2++QatWrSCTySr9fXzYhg0b0KZNG1hYWKB58+b44osvtJ4vLi7Gu+++i7Zt28Le3h6Ojo4ICwvDpk2bKhzrUTEsXboUISEhsLGxga2tLQICAgzeO7N+/Xr07NkTjRo1qtH+5YnV41y+fBnPPfccPD09IZPJ4Obmhl69eiE5OVmzjUKhwAcffAB3d3dYWVnhqaeewj///FPp8QYNGgQbGxt89913NYqbiHTDnisiqhUXL14EgMcOh9m8eTN2796N2bNnw9raGnFxcXj++edhbm6O4cOHA1B/qO3UqRPMzMzw0UcfwdfXF0lJSfjkk09w9epVrFy5Uuf4FAoFBg8ejHHjxuHdd9/Fvn37MGfOHNjb2+Ojjz4CAJSWlmLIkCHYv38/PvjgA4SHh+PatWuYOXMmunfvjiNHjjz2W3WFQoGnn34aEydOxPvvv4+ff/4ZMTExyM/Px/r16/F///d/aNKkCb788kuMGTMGwcHBCA0N1fm8r169ildffRVNmzYFoE7c3nzzTdy4cUNzPuWOHz+Od999F1OnToWbmxu+//57jBs3Di1atEDXrl2rPJedO3cCAIYOHVqtn/HLL7+MefPmYcWKFVi8eLGmPSUlBf/88w+mTp0KsVhc5f5Xr17FgAEDEBkZiRUrVsDBwQE3btzA9u3bIZfLYWVlBaVSiejoaOzfvx+TJ09Gz549oVQq8ddffyE1NRXh4eEQBAFDhw7FH3/8gZiYGERGRuLEiROYOXMmkpKSkJSUBJlMpnndf//9F2fOnMGHH34IHx8fWFtbP/Hv4LZt2+Dq6orAwECt9pSUFISHh6Np06b4/PPP4e7ujh07duCtt95CdnY2Zs6cqbV9TEwMwsLC8M0338DMzAyurq4A1F8CDB48GK+++iqmTp0KpVKJ4uJi9OjRA5cuXcKsWbPQpk0b7N+/H3PnzkVycjK2bdumdeyNGzdi//79+Oijj+Du7q45dlWSk5MxefJkfPzxx3B3d8eaNWvw9ttvQy6X47333gOgTnxv376N9957D40bN4ZcLseuXbvw9NNPY+XKlXjxxRcfG8O6deswadIkvPnmm/jss89gZmaGixcvIiUl5ZHxPYni4mJs27YNixYtMthrlOvfvz9UKhUWLFiApk2bIjs7G4cOHdJK/F955RWsXr0a7733Hvr06YNTp07h6aefRkFBQYXjSaVShIeHY9u2bZg9e7bB4ydq8AQiIj1auXKlAED466+/BIVCIRQUFAhbt24VXFxcBFtbWyEjI0MQBEF46aWXBG9vb619AQiWlpaabQRBEJRKpRAQECC0aNFC0/bqq68KNjY2wrVr17T2/+yzzwQAwunTp7WOOXPmTM3j3bt3CwCE3bt3a9peeuklAYDwyy+/aB2vf//+gr+/v+bx2rVrBQDC+vXrtbY7fPiwAECIi4t75M+m/HUe3F+hUAguLi4CAOHff//VtOfk5AhisViYMmVKjc77QSqVSlAoFMLs2bMFJycnobS0VPOct7e3YGFhoXXMe/fuCY6OjsKrr776yPPp16+fAEAoLi5+5HYP6tatm+Ds7CzI5XJN27vvvisAEM6fP//IfX/99VcBgJCcnFzlNqtXrxYACN99912V22zfvl0AICxYsECrP
T4+XgAgLFu2TNPm7e0tiMVi4dy5c1rb1vS9KNeqVSuhX79+Fdr79u0rNGnSRMjLy9Nqf+ONNwQLCwvh9u3bgiDc/z3u2rVrhWOU/56tWLFCq/2bb76p9Pd8/vz5AgBh586dmjYAgr29veb1Hsfb21sQiUQV3ps+ffoIdnZ2QmFhYaX7KZVKQaFQCOPGjRPatWun9VxVMbzxxhuCg4NDteKq6vXKl+XLlwsAtNoUCoWgUqm09tu4caMgFouFrKysGr3uw4KCgoRu3bpVaM/OzhYACLGxsVXue+bMGQGA8M4772i1r1mzRgAgvPTSSxX2mT59umBmZibcvXv3SUMnosfgsEAiMoguXbpAIpHA1tYWAwcOhLu7O37//ffHFivo1auX1jZisRgjR47ExYsXcf36dQDA1q1b0aNHD3h6ekKpVGqW8iFle/fu1TlekUhU4V6KNm3aaIYxlr+ug4MDBg0apPW6bdu2hbu7e7UqEIpEIvTv31/z2NzcHC1atICHhwfatWunaXd0dISrq2uF16/uef/555/o3bs37O3tIRaLIZFI8NFHHyEnJwdZWVlaMbVt21bTwwUAFhYWaNmypdZr68u4ceOQnZ2NzZs3A1Dfe/TTTz8hMjISfn5+j9y3bdu2kEqlmDBhAn744Qdcvny5wja///47LCws8PLLL1d5nD///BMAtKqqAcCzzz4La2tr/PHHH1rtbdq0QcuWLbXanvR38ObNmxV6goqLi/HHH39g2LBhml648qV///4oLi7GX3/9pbXPo4orPPzcn3/+CWtra00PcLnyn8PD563rELigoCCEhIRotY0aNQr5+fmaqpEA8L///Q8RERGwsbGBubk5JBIJli9fjjNnzlQ4ZmUxdOrUCbm5uXj++eexadMmZGdnVztGX19fSCQSzVJ+j9+DbRKJpEIPz/r16xEZGanpeRcEQev9qc5Q0OpwdHSEr68vFi5ciEWLFuHYsWMoLS3V2mb37t0A1NU1HzRixIhK77cDAFdXV5SWliIjI0MvcRJR1ZhcEZFBrF69GocPH8axY8dw8+ZNnDhxAhEREY/dz93dvcq2nJwcAEBmZia2bNlS4QNRUFAQAOj0YauclZUVLCwstNpkMhmKi4s1jzMzM5GbmwupVFrhtTMyMqr1upW9jlQqhaOjY4VtpVJphdevznn/888/iIqKAgB89913OHjwIA4fPozp06cDUBdkeJCTk1OF15bJZBW2e1h5QnblypVHbveg4cOHw97eXjNsLiEhAZmZmdUqZOHr64tdu3bB1dUVr7/+Onx9feHr64slS5Zotrl16xY8PT0feX9LTk4OzM3NKwxRFYlEcHd31/yelfPw8KhwjCf9Hbx3716F34OcnBwolUp8+eWXFY5bnpA/fNzKYgPUv2d2dnYVju/u7g6RSKTV7urqCnNz82qd96NU59r97bffMGLECDRu3Bg//fQTkpKScPjwYbz88stav+uPimH06NFYsWIFrl27hmeeeQaurq7o3LkzEhMTHxvjli1bcPjwYc1SPszywbbDhw9jwoQJmn0UCgW2bNmilazu3bu3wnt09erVx77+44hEIvzxxx/o27cvFixYgPbt28PFxQVvvfWWZshf+c/y4Z+3ubl5pdcyAM3v2uOuaSJ6crzniogMolWrVppqgbqo7JvV8rbyDw7Ozs5o06YN/vvf/1Z6DE9PT51ftzqcnZ3h5OSE7du3V/q8ra2tQV73wdevznmvW7cOEokEW7du1foAv3HjRr3G07dvXyxbtgwbN27E1KlTq7WPpaUlnn/+eXz33XdIT0/HihUrYGtri2effbZa+0dGRiIyMhIqlQpHjhzBl19+icmTJ8PNzQ3PPfccXFxccODAAZSWllaZYDk5OUGpVOLWrVtaCZYgCMjIyEDHjh21tn84GQGe/HfQ2dkZt2/f1mpr1KgRxGIxRo8ejddff73S/Xx8fB4bW1XtTk5O+PvvvyEIgtbzWVlZUCqVcHZ2rtaxq1Kda/enn36Cj48P4uPjtY7/cEGNx8UwduxYjB07FoWFhdi3bx9mzpyJgQMH4vz58/D29q4yxtatW2s9PnXqFAA88v+qXbt2IS8vD8OGDdO0hYaG4vDhw1rb6ev/HW9vbyxfvhwAcP78efzyyy/4+OOPIZfL8c0332h+lhkZGWjcuLFmP6VSWSFBLlf+u/bwe0xE+sfkiojqlD/++AOZmZmaoYEqlQrx8fHw9fVFkyZNAAADBw5EQkICfH19a1y5qyYGDhyIdevWQaVSoXPnzrX2ug++fnXOu7wc94PFIe7du4cff/xRr/EMGTIErVu3xty5czFw4MBKKwbu2LEDkZGRWmXAx40bh2+++QYLFy5EQkICxowZo3OZcLFYjM6dOyMgIABr1qzBv//+i+eeew7R0dFYu3YtVq1aVeXQwF69emHBggX46aef8M4772ja169fj8LCQk1lvUd50t/BgIAAXLp0SavNysoKPXr0wLFjx9CmTRtIpVKdj/sovXr1wi+//IKNGzdqJQqrV6/WPP8kTp8+jePHj2sNDfz5559ha2urmVdOJBJBKpVqJU0ZGRmVVgusDmtra0RHR0Mul2Po0KE4ffr0I5Ormli/fj26dOmilcjY2trW6MsjXbVs2RIffvgh1q9frxlaWT758Jo1azTFbgDgl19+qXJ44uXLl+Hk5NRg5pAjMiYmV0RUpzg7O6Nnz56YMWOGplrg2bNntcqxz549G4mJiQgPD8dbb70Ff39/FBcX4+rVq0hISMA333yjScT06bnnnsOaNWvQv39/vP322+jUqRMkEgmuX7+O3bt3Y8iQIVofWvWtuuc9YMAALFq0CKNGjcKECROQk5ODzz77TKsCnj6IxWJs2LABUVFRCAsLw2uvvYYePXrA2toa165dw6+//ootW7bgzp07Wvt16NABbdq0QWxsLARBqPbcVt988w3+/PNPDBgwAE2bNkVxcbGmZHzv3r0BAM8//zxWrlyJiRMn4ty5c+jRowdKS0vx999/o1WrVnjuuefQp08f9O3bF//3f/+H/Px8REREaKoFtmvXrtLy6A970t/B7t27Y/bs2RXmn1qyZAmeeuopREZG4rXXXkOzZs1QUFCAixcvYsuWLZr7xWrixRdfxNdff42XXnoJV69eRevWrXHgwAF8+umn6N+/v+ZnWFOenp4YPHgwPv74Y3h4eOCnn35CYmIi5s+frznH8rL2kyZNwvDhw5GWloY5c+bAw8MDFy5cqNbrvPLKK7C0tERERAQ8PDyQkZGBuXPnwt7evkKv45NSqVTYtGlTtXtmH+XIkSOaoYP5+fkQBAG//vorAKBjx47w9vbGiRMn8MYbb+DZZ5+Fn58fpFIp/vzzT5w4cUITQ6tWrfCf//wHsbGxkEgk6N27N06dOoXPPvuswlDQcn/99Re6deumc28kEdWAUctpEJHJKa8WePjw4UduV1W1wNdff12Ii4sTfH19BYlEIgQEBAhr1qypsP+tW7eEt956S/Dx8REkEong6OgohIaGCtOnT9eqiIVqVgu0trau8Boz
Z84UHv5vUqFQCJ999pkQEhIiWFhYCDY2NkJAQIDw6quvChcuXHjsOVf2Ot26dROCgoIqtHt7ewsDBgyo0XmvWLFC8Pf3F2QymdC8eXNh7ty5mspoV65ceeRrlMdUWTWzyuTm5gpz5swR2rdvL9jY2AgSiURo2rSp8J///Ec4ePBgpfssWbJEACAEBgZW6zUEQRCSkpKEYcOGCd7e3oJMJhOcnJyEbt26CZs3b9ba7t69e8JHH30k+Pn5CVKpVHBychJ69uwpHDp0SGub//u//xO8vb0FiUQieHh4CK+99ppw584drWNV9fMRhOq/F5W5ePGiIBKJKlTuEwRBuHLlivDyyy8LjRs3FiQSieDi4iKEh4cLn3zyiWab8t/j//3vfxX2r+r3TBDUVSgnTpwoeHh4CObm5oK3t7cQExNToeJj+bVYXeU/p19//VUICgoSpFKp0KxZM2HRokUVtp03b57QrFkzQSaTCa1atRK+++67Sq+1qmL44YcfhB49eghubm6CVCoVPD09hREjRggnTpyodrzlyv+/qsquXbsEAMLly5d1PvbDyqs4VrasXLlSEARByMzMFMaMGSMEBAQI1tbWgo2NjdCmTRth8eLFglKp1ByrpKREePfddwVXV1fBwsJC6NKli5CUlCR4e3tXqBZ48eLFSqucEpFhiAShGtOSExHVApFIhNdffx1fffWVsUMhMrjyqpO///67sUOhKkyaNAl///03jh49auxQamzGjBlYvXo1Ll26VGU1QSLSH15lRERERjB37ly0a9cOhw8f1vtwNtKPuLg4Y4fwRHJzc/H111/jyy+/ZGJFVEtYip2IiMgIgoODsXLlSs49RAZz5coVxMTEYNSoUcYOhajB4LBAIiIiIiIiPWDPFRERERERkR4wuSIiIiIiItIDoydXcXFx8PHxgYWFBUJDQ7F///5Hbr9mzRqEhITAysoKHh4eGDt2bIUZydevX4/AwEDIZDIEBgZiw4YNhjwFIiIiIiIi495zFR8fj9GjRyMuLg4RERH49ttv8f333yMlJQVNmzatsP2BAwfQrVs3LF68GIMGDcKNGzcwceJE+Pn5aRKopKQkREZGYs6cORg2bBg2bNiAjz76CAcOHEDnzp2rFVdpaSlu3rwJW1tbTrhHRERERNSACYKAgoICeHp6wszsMX1TRpxjS+jUqZMwceJErbaAgABh6tSplW6/cOFCoXnz5lptX3zxhdCkSRPN4xEjRgj9+vXT2qZv377Cc889V+240tLSqpzojwsXLly4cOHChQsXLg1vSUtLe2weYbRJD+RyOY4ePYqpU6dqtUdFReHQoUOV7hMeHo7p06cjISEB0dHRyMrKwq+//ooBAwZotklKSsI777yjtV/fvn0RGxtbZSwlJSUoKSnRPBbKOvPS0tJgZ2en66kRaRMEIC9PvW5vD7A3lIiIiKjeyM/Ph5eXF2xtbR+7rdGSq+zsbKhUKri5uWm1u7m5VTnnR3h4ONasWYORI0eiuLgYSqUSgwcPxpdffqnZJiMjQ6djAuqJHGfNmlWh3c7OjskVPTm5HFi8WL0+bRoglRo3HiIiIiLSWXVuFzJ6QYuHgxQEocrAU1JS8NZbb+Gjjz7C0aNHsX37dly5cgUTJ06s8TEBICYmBnl5eZolLS2thmdDREREREQNldF6rpydnSEWiyv0KGVlZVXoeSo3d+5cRERE4P333wcAtGnTBtbW1oiMjMQnn3wCDw8PuLu763RMAJDJZJDJZE94RkRERERE1JAZredKKpUiNDQUiYmJWu2JiYkIDw+vdJ+ioqIKFTrEYjGA+/dJhYWFVTjmzp07qzwmERERERGRPhit5woApkyZgtGjR6NDhw4ICwvDsmXLkJqaqhnmFxMTgxs3bmD16tUAgEGDBuGVV17B0qVL0bdvX6Snp2Py5Mno1KkTPD09AQBvv/02unbtivnz52PIkCHYtGkTdu3ahQMHDhjtPImIiIiIyPQZNbkaOXIkcnJyMHv2bKSnpyM4OBgJCQnw9vYGAKSnpyM1NVWz/ZgxY1BQUICvvvoK7777LhwcHNCzZ0/Mnz9fs014eDjWrVuHDz/8EDNmzICvry/i4+OrPccVERERERFRTRh1EuG6Kj8/H/b29sjLy2O1QHpycjnw6afqdVYLJCIiIqpXdMkNjNpzRdQgmJkBHTveXyciIiIik8TkisjQzM2BBya6JiIiIiLTxK/RiYiIiIiI9IA9V0SGJghAUZF63coKqMbs3kRERERU/7DnisjQFApg4UL1olAYOxoiIiIiMhAmV0RERERERHrA5IqIiIiIiEgPmFwRERERERHpAZMrIiIiIiITVSRXotnUbWg2dRuK5Epjh2PymFwREREREVGdUZ8TQiZXRERERESkk/qcABkS57kiMjQzM6Bt2/vrRERERA8okisR+NEOAEDK7L6wkvIjen3Fd47I0MzNgaFDjR0FERGRyWOSQsbGr9GJiIiIwGFORPTkmM4TGZogAAqFel0iAUQi48ZDRERERAbBnisiQ1MogE8/VS/lSRYREVEDVh97CetjzFT7mFwRERERGRg/mBM1DEyuiIiIiIiI9IDJFREREVE9xR4xorqFyRUREREREZEeMLkiIiIiIiLSAyZXREREVG9wGBwR1WWc54rI0MzMgMDA++tERDookisR+NEOAEDK7L6wkvJPN9UO/u4R6Y5XCZGhmZsDI0YYOwoiogr44ZmISL/4NToRERHpFYfuEVFDxeSKiIiIiIhID9j/T2Rocjnw6afq9WnTAKnUuPEQERERkUGw54qIiIiIiEgPmFwRERERERHpAZMrIiIiIiIiPWByRUREREREpAdMroiIiIiIiPSAyRUREREREZEeGD25iouLg4+PDywsLBAaGor9+/dXue2YMWMgEokqLEFBQZptVq1aVek2xcXFtXE6RBWZmQF+furFzOiXHBEREREZiFE/6cXHx2Py5MmYPn06jh07hsjISERHRyM1NbXS7ZcsWYL09HTNkpaWBkdHRzz77LNa29nZ2Wltl56eDgsLi9o4JaKKzM2BF15QL+acWo6IiIjIVBk1uVq0aBHGjRuH8ePHo1WrVoiNjYWXlxeWLl1a6fb29vZwd3fXLEeOHMGdO3cwduxYre1EIpHWdu7u7rVxOkRERERE1IAZLbmSy+U4evQooqKitNqjoqJw6NChah1j+fLl6N27N7y9vbXa7969C29vbzRp0gQDBw7EsWPHHnmckpIS5Ofnay1ERERERES6MFpylZ2dDZVKBTc3N612Nzc3ZGRkPHb/9PR0/P777xg/frxWe0BAAFatWoXNmzdj7dq1sLCwQEREBC5cuFDlsebOnQt7e3vN4uXlVbOTIqqMXA7897/qRS43djREREREZCBGv7teJBJpPRYEoUJbZVatWgUHBwcMHTpUq71Lly74z3/+g5CQEERGRuKXX35By5Yt8eWXX1Z5rJiYGOTl5WmWtLS0Gp0LUZUUCvVCRERERCbLaHfXOzs7QywWV+ilysrKqtCb9TBBELBixQqMHj0aUqn0kdu
amZmhY8eOj+y5kslkkMlk1Q+eiIiIiIjoIUbruZJKpQgNDUViYqJWe2JiIsLDwx+57969e3Hx4kWMGzfusa8jCAKSk5Ph4eHxRPESERERERE9ilHrQk+ZMgWjR49Ghw4dEBYWhmXLliE1NRUTJ04EoB6ud+PGDaxevVprv+XLl6Nz584IDg6ucMxZs2ahS5cu8PPzQ35+Pr744gskJyfj66+/rpVzIiIiIiKihsmoydXIkSORk5OD2bNnIz09HcHBwUhISNBU/0tPT68w51VeXh7Wr1+PJUuWVHrM3NxcTJgwARkZGbC3t0e7du2wb98+dOrUyeDnQ0REREREDZfRZzSdNGkSJk2aVOlzq1atqtBmb2+PoqKiKo+3ePFiLF68WF/hERERERERVYvRkysikycSAc2a3V8nIqMokisR+NEOAEDK7L6wkvJPIBER6Rf/shAZmkQCjBlj7CiIiIiIyMCMPs8VERERERGRKWByRURE9ISK5Eo0m7oNzaZuQ5FcaexwiIjISJhcUb1SLz/AyOXAggXqRS43djREREREZCC854qoNjyiwiURERERmQb2XBEREREREekBkysiIiIiIiI9YHJFRERERESkB0yuiIiIiIiI9IDJFRERERERkR6wWiCRoYlEgKfn/XUiIiIiMklMrogMTSIBJkwwdhREREREZGAcFkgEw05OXC8nPiYiIiIinTG5IiIiIiIi0gMOCyQyNIUCLx/eVLbeE5DysiMiIiIyRfyUR2RoggC7kruadSIiIiIyTRwWSEREREREpAdMroiIiIiIiPSAyRUREREREZEeMLkiIpPBsvdERERkTEyuiIiIiIiI9IDVAokMTSRCjpW9Zp2IiIiITBOTKyJDk0jwY/uBAIAYicTIwRARERGRoXBYIBERERERkR4wuSIiIiIiItIDDgskMjSFAqP/3Vq23hOQ8rIjIiIiMkX8lEdkaIIAp6I8zToRERERmSYOCyQiIiIiItIDJldERFSncDJoIiKqr5hcEVGl+AGXiIiISDdMroiIjMxQiawhE2Qm30RERBUxuSIiIiIiItIDoydXcXFx8PHxgYWFBUJDQ7F///4qtx0zZgxEIlGFJSgoSGu79evXIzAwEDKZDIGBgdiwYYOhT4OoaiIR8mU2yJfZACKRsaMhIiIiIgMxanIVHx+PyZMnY/r06Th27BgiIyMRHR2N1NTUSrdfsmQJ0tPTNUtaWhocHR3x7LPParZJSkrCyJEjMXr0aBw/fhyjR4/GiBEj8Pfff9fWaRFpk0iwouMQrOg4BJBIjB0N1RCHwREREdHjGDW5WrRoEcaNG4fx48ejVatWiI2NhZeXF5YuXVrp9vb29nB3d9csR44cwZ07dzB27FjNNrGxsejTpw9iYmIQEBCAmJgY9OrVC7GxsbV0VkRERERE1BAZLbmSy+U4evQooqKitNqjoqJw6NChah1j+fLl6N27N7y9vTVtSUlJFY7Zt2/fRx6zpKQE+fn5WgtRfcDeFCIiIqK6w9xYL5ydnQ2VSgU3Nzetdjc3N2RkZDx2//T0dPz+++/4+eeftdozMjJ0PubcuXMxa9YsHaIn0oFCgeeTt5et9wSkRrvsiIiIiMiAjF7QQvTQDf6CIFRoq8yqVavg4OCAoUOHPvExY2JikJeXp1nS0tKqFzxRdQgC3O7mwO1uDiAIxo6GiIiIiAzEaF+hOzs7QywWV+hRysrKqtDz9DBBELBixQqMHj0aUqlU6zl3d3edjymTySCTyXQ8AyIiIiIiovuM1nMllUoRGhqKxMRErfbExESEh4c/ct+9e/fi4sWLGDduXIXnwsLCKhxz586djz0mERERERHRkzDqzR9TpkzB6NGj0aFDB4SFhWHZsmVITU3FxIkTAaiH6924cQOrV6/W2m/58uXo3LkzgoODKxzz7bffRteuXTF//nwMGTIEmzZtwq5du3DgwIFaOSciIiIiImqYjJpcjRw5Ejk5OZg9ezbS09MRHByMhIQETfW/9PT0CnNe5eXlYf369ViyZEmlxwwPD8e6devw4YcfYsaMGfD19UV8fDw6d+5s8PMhIiIiIqKGy+hlyyZNmoRJkyZV+tyqVasqtNnb26OoqOiRxxw+fDiGDx+uj/CoBorkSgR+tAMAkDK7L6xYHY+IiIiIGgB+6iWqBfckLJhCREREZOqYXBEZmlSKbzure1Lffqi6JRERERGZDqPPc0VERERERGQKmFwRERERERHpAYcFEhmaQoHhJ3eVrfcEGniBDxY8ISIiIlPFTzVEhiYIaJKXqVknIiIiItPEYYFERERERER6wOSKiIiIiIhID5hcERERERER6QGTKyIiIiIiIj1gckVERERERKQHrBZIVAsUZrzUiIiIiEwdP/ERGZpUiq/DRwIAXpdKjRwMERERERkKhwUSERERERHpAZMrIiIiIiIiPeCwQCJDUyox5PTusvVegJSXHREREZEp4qc8IkMrLYXPnZuadSIiIiIyTRwWSEREREREpAdMroiIiIiIiPSAyRUREREREZEeMLkiIiIiIiLSAyZXREREREREesDkioiIiIiISA9Yip3I0KRSxD71AgBgglRq5GCIiIiIyFDYc0VERERERKQHTK6IiIiIiIj0gMMCiQxNqcSAM/vL1nsBUl52RERERKZIp095586dw9q1a7F//35cvXoVRUVFcHFxQbt27dC3b18888wzkMlkhoqVqH4qLYVfTqpmnYiIiIhMU7WGBR47dgx9+vRBSEgI9u3bh44dO2Ly5MmYM2cO/vOf/0AQBEyfPh2enp6YP38+SkpKDB03ERERERFRnVKtnquhQ4fi/fffR3x8PBwdHavcLikpCYsXL8bnn3+OadOm6S1IIiIiIiKiuq5aydWFCxcgrUYJ6bCwMISFhUEulz9xYERERERERPVJtYYFViexepLtiYiIiIiI6rtq9Vx98cUX1T7gW2+9VeNgiIiIiIiI6qtqJVeLFy/Wenzr1i0UFRXBwcEBAJCbmwsrKyu4urrqnFzFxcVh4cKFSE9PR1BQEGJjYxEZGVnl9iUlJZg9ezZ++uknZGRkoEmTJpg+fTpefvllAMCqVaswduzYCvvdu3cPFhYWOsVGRERERERUXdVKrq5cuaJZ//nnnxEXF4fly5fD398fgLpE+yuvvIJXX31VpxePj4/H5MmTERcXh4iICHz77beIjo5GSkoKmjZtWuk+I0aMQGZmJpYvX44WLVogKysLSqVSaxs7OzucO3dOq42JFRmNRIKvw0YAACZIJEYOhoiIiIgMRefZTGfMmIFff/1Vk1gBgL+/PxYvXozhw4fjhRdeqPaxFi1ahHHjxmH8+PEAgNjYWOzYsQNLly7F3LlzK2y/fft27N27F5cvX9ZULWzWrFmF7UQiEdzd3XU8MyIDEYmgEKuTqrxiJaTmYpiLq3W7o9EIgoDCEuXjNyQiIiIiDZ2Tq/T0dCgUigrtKpUKmZmZ1T6OXC7H0aNHMXXqVK32qKgoHDp0qNJ9Nm/ejA4dOmDBggX48ccfYW1tjcGDB2POnDmwtLTUbH
f37l14e3tDpVKhbdu2mDNnDtq1a1dlLCUlJVpzc+Xn51f7PIge59ClHM162Nw/AQAWEjPYyCSwtTCHtUwMG5k5bGQS2MjEsLEwh7XMHLYyc9jIytYt1M9by8Rl+6ifEz3wOkpVKfKKFLgrV6KwRIm7JUrcLb6/rmkrUaFQ6/GD26jU/8qVEIT7x76YdRdtmjjUzg+MiIiIqJ7SObnq1asXXnnlFSxfvhyhoaEQiUQ4cuQIXn31VfTu3bvax8nOzoZKpYKbm5tWu5ubGzIyMird5/Llyzhw4AAsLCywYcMGZGdnY9KkSbh9+zZWrFgBAAgICMCqVavQunVr5OfnY8mSJYiIiMDx48fh5+dX6XHnzp2LWbNmVTt2ouooKFbg04QzWPtPmrpBEACROh0qVpSiWFGC7Lv6m3C7zaxEvR3rYcPiDuGlsGaY3McPdhYc2khERERUGZ2TqxUrVuCll15Cp06dICm7f0SpVKJv3774/vvvdQ5AJBJpPRYEoUJbudLSUohEIqxZswb29vYA1EMLhw8fjq+//hqWlpbo0qULunTpotknIiIC7du3x5dfflll1cOYmBhMmTJF8zg/Px9eXl46nwtRub3nbyFm/QnczCsGAITcPIeIq8mYsCUOgrlU02NU3rtU5br8flthiRIFDzyvKhUqfW2JWKTp8bJ5oPdL/a9Y00Nm/UCvmLX0ge3LetPMRECHT/4AAKhKBaw4eAWbj9/AB/0CMLx9E5iZVX6dEhERETVUOidXLi4uSEhIwPnz53H27FkIgoBWrVqhZcuWOh3H2dkZYrG4Qi9VVlZWhd6sch4eHmjcuLEmsQKAVq1aQRAEXL9+vdKeKTMzM3Ts2BEXLlyoMhaZTAaZTKZT/ESVybunwCdbU/C/o9cBAE0drTC7f0skT1wDAJCKzWBlLUUj6yebC04QBJQoS5FVUIyuC/YAAA7+Xw8428ogMxc/0bHLFcnv33O1bHQo5m0/i8u3CvHBryfw89+pmDU4CCFeDnp5LX1QlQrYlXJ/aPLV7EIEeto/Yg8iIjJlP/51DRbm6i8LRSIRzEQimIkAM5EIorJ/zcwAER54XLaN6IFty7eRq1TGPiWqB3ROrso1a9YMgiDA19cX5ua6H0YqlSI0NBSJiYkYNmyYpj0xMRFDhgypdJ+IiAj873//w927d2FjYwMAOH/+PMzMzNCkSZNK9xEEAcnJyWjdurXOMRLp4o8zmZi24SQy80sgEgFjwpvh/b7+gFyOZD2/lkgkgoVEDGeb+18KNLKW6i2xethTfs7Y7t8Vqw5dwZJdF5CclouhcQcxItQL7/fz14qjthXJlfj16HWsOHAFV3OKNO3PfJOEOUOC8Uz7xlX2hhOR/hUrVNh3/pbm8f4LtxAV6M7rkAzudqEcsbvOax7PTThrsNeatSUFHw5oBVsOlaeH6JwVFRUV4c0338QPP/wAQJ3cNG/eHG+99RY8PT0rFKh4lClTpmD06NHo0KEDwsLCsGzZMqSmpmLixIkA1MP1bty4gdWrVwMARo0ahTlz5mDs2LGYNWsWsrOz8f777+Pll1/WFLSYNWsWunTpAj8/P+Tn5+OLL75AcnIyvv76a11PlahacovkmL0lBb8duwEA8HG2xsLhbdChmbqiZZFcbszw9EZqboYJXX0xtG1jzPv9LH47dgPxR9KQcCodU/q0xOgu3rVaBTErvxg/JF3FT3+lIu+eusiOnaU58u+pe9zuyVV473/HceDCLcwZGsw/gEQGdDW7EHvOZWH3uVv463IOSpSlmude/fFfdGrmiPf6+qOTj6MRoyRTVViixPIDV7Bs32XcfaDSbXSwO8xEIpQKQtmi/tJdEKB5XKr1+P42lf2rLC3FmfQCAED84TTsO38LnwwNRq9WlY+4ooZJ5+QqJiYGx48fx549e9CvXz9Ne+/evTFz5kydkquRI0ciJycHs2fPRnp6OoKDg5GQkABvb28A6sqEqampmu1tbGyQmJiIN998Ex06dICTkxNGjBiBTz75RLNNbm4uJkyYgIyMDNjb26Ndu3bYt28fOnXqpOupEj3WjtMZ+HDjKdwqKIGZCBgf2RxT+rSEhcQwPUh1gaudBRaNbIsXujTFR5tO4/TNfMzakoK1/6Ti48FBCPd1Nujrn83Ix3f71Pd/KVTq+868naww7ikf9G/trrlP7K1eLfD17kvYmHwTx9Jy8eXz7VjxkEhPihUq/HU5B3vO3cKec1lavcYA4G5vgYyye05l5mb45+ptjPg2Cd1auuD9vv4Ibswhu/Tk5MpSrDucii/+uKgpEBXgbouzGeoE6PMRIbCS1niQVgVFciUCP9oBAPBytETa7XsY98MRDArxxMxBgUYdxUF1h86/cRs3bkR8fDy6dOmi1cUfGBiIS5cu6RzApEmTMGnSpEqfW7VqVYW2gIAAJCZWXRVt8eLFWLx4sc5xEOnidqEcMzefxpbjNwEAvi7WWPhsCNo3bWTkyGpPqLcjNr/xFOIPp2HhjrM4n3kXo777GwNae2DagFZo7GD5+INUkyAI2HchG9/vv4z9F7I17R28G2F8ZHP0CXSD2EykdZ/YxG6+6NbSBW+tTca1nCI8s/QQPugbgHFP+bAYB1ENXMsp1CRTSZdzUKy43ztlbiZCx2aO6O7vgh4BrmjsYIGgmTsBANsnR+L7/VcQfzgNe8/fwt7zt9C/tTum9GmJFq62xjodqsdKSwVsOXETn+88j9Tb6sS+qaMV3o1qiV4Brgj+eKfBY9g4KQLL9l3Gd/svY8vxmzhw4RY+GhSIoW05FL2h0zm5unXrFlxdXSu0FxYW8peJGoSEk+mYsfEUcgrlMBMBr3bzxdu9/Ey6t6oqYjMRRnVuiv6t3bEo8Tx++usatp1Mxx9nM/F69xZ4pWvzJ/q5lChV2JR8E8v3X8G5TPU3kWYiIDrYA+MjfdDuMclsqLcjEt6KxNTfTuD3Uxn4b8IZHLiYjc+eDYGLLb9hJHqUYoUKf1+5jT3nsrD33C1czi7Uet7D3gLd/V3QraUrIlo4aQ29ffCLDjc7C/x3WGtM6NocsbsuYGPyDSSczMD2Uxl4un0TvN3LD16OVrV2XlR/CYKAvedvYcH2c0hJV89J6mwjw9u9WmBkx6aQmptp/e4ZkqVUjJj+rTCgjQc++PUEzmYU4J3449iUfBOfDA1Gk0b8nW6odE6uOnbsiG3btuHNN98EcL+U+nfffYewsDD9RkdUh2TfLcFHm04h4aS6wmVLNxssHB7y+Ip5Egm+7fQMAGCCxDTv+3GwkmL2kGA817EpPt5yGv9cuY3PE8/jl6NpmDEgEH0C3XT68uVOoRxr/r6GH5Ku4VaBeqiHtVSMER298HKEj04fxOytJIh7oT3W/pOGWVtOY+/5W4hesh+LR4Yg0s9F53MlMmWpOUXYcz4Le87dwqFL2RV6pzo0a4Tu/q7o7u8Cfzdbna5rbydrLB7ZFhO7+eLzneewMyUTvx69jk3JNzCqU1O83rMFXG0tDHFaZAKOpd7B/O1n8dfl2wAAW5k5JnRtjpef8oG1TH9D/3TVpokDt
rz5FJbtu4wlf1zAnnO3ELV4Hz7o648Xw5pxpEQDpPNv49y5c9GvXz+kpKRAqVRiyZIlOH36NJKSkrB3715DxEhkVIIgYMuJdMzcdAp3ihQQm4kwqbsv3ujZonrV+UQi3JNaaNZNWaCnHeIndMGWE+n4dNsZpN2+hwk/HkXXli6YOSgQvi42j9z/SnYhlh+4jF+PXtd8qHO3s8CYiGZ4vlNT2FvWLDkVidQ9bB2aNcKbPx/DucwCjF7+D17t1hzvRflDUouFOMh4buTe06z3XrQXTRpZoUkjS/W/DpaadXd7C0jNjfs7cU+uQlZBMTLzS5B25/79TCsOXIGFRKwpFy02E0EkEkFsdr98tLisdPT9stIiiM3ul6IuXxeLRJCr7idPA744gCsP9U652cnQoyyZimjhrJfCMP7utlj2Ygckp+Xisx3ncOBiNn5Iuob4I2kYE+6Did2aw8HqyaarINNxMesuFu44ix2n1VNtSMVmeDHMG5N6tIDjE05roi8SsRle79ECfYPcEfPbCRy+egcfb0nBlhPpmP9Maw5/bWB0Tq7Cw8Nx6NAhLFy4EL6+vti5cyfat2+PpKQkljsnk5NVUIwPN5zCzrL5kwLcbfHZsyG8GfsRRCIRBod4oleAK77efRHf77+CfedvoV/sPrwc4YPxkT5a2wuCgMNX7+C7/Zex60wmhLK5kYM87fBKZHP0b+2htw+6Ld1ssemNCMzZmoI1f6fi272X8dfl2/jyuXZo6sQhHKYqv1iBr3dfxMqDVzVtN3OLcTO3GP9cqbi9SKRO6ps0skRjB0tNEta4LPnydLCo8bQHxQoVsvJLkFlQjMz8Ys16Vn6JJpnKzC9GQXHlQ5s+23m+0nZ9uJJdCLGZCKHejTQJVYC7br1Tumjr5YCfxnfGoUvZWLjjHI6l5uKbvZew5q9rmNC1OcY+5QMbI/ZIkHGl591DbOIF/O9oGkoF9ZDwp9s3wTt9Wur1nl59auFqg/gJYVjz9zXM+/0sjl67g/5LDuCNni0wsZuv0b+0odqh0/9aCoUCEyZMwIwZMzSl2IlMkSAI2HDsBmZtSUHePQXMzUR4o2cLTOreQvf/HJVK9Lh0uGy9F6DHykV1mbXMHB/0C8CIDl6YvTUFf57Nwrf7LmtK1gPA7yfTsTrpGo5fz9O09QxwxfhIH4Q1dzLIhzoLiRj/HdYaT7Vwxv+tP4HjabkY8MV+/Pfp1hgc4qn31yPjUahK8fPfqVjyxwXcLtSeEuHn8Z1x624Jrt+5hxu593D9zj1cv1OEG3fuoURZivS8YqTnFeMw7lR6bFdbmaanq3EjS7g+cA/f9lMZyC1SILOgGLceSJ4y84uRX0XSVBkLiRnc7CzgbCPD0WvqOIa09YQIQKkAqARBXSK69P66qvR+eenSsuceLDGtKhU0ZaVVpQJUpaU4l3kXABA7MgQ9W7nBrpanLQj3dcZvrznhz7NZWLjjHM5mFODzxPNYdegqJvVogRc6N22Q97Q2VLlFcizdcwmrDl3VlPTvE+iG9/v6o6Vb3e8BMjMTYXRYM/Rq5YYPN57Cn2ezsCjxPBJOpmP+M20efysB1Xs6fcqTSCTYsGEDZsyYYah4iIwuM78Yn2w9gz/OZgFQ96AsHB6CQE+7mh2wtBQh6ec16w1NM2drrBjTEX+ezcTsLSlaJZvf/d8JAOo5tJ5p3xjjnvKpteET0a090MbLAW+vPYYj1+7grbXHsP/8LcwaEqTX0r1U+wRBQGJKJub9flZThMHXxRrvRrXEpDXHAABtmzpU+j4LgoDsu/KyhKtInXyVJ15lSViRXIWsghJkFZTg39TcCseY8svxR8ZXnjS52VrAxU4GN1sLuNnJ4Fq27mpnAVc7GWxl5hCJRFrln+c+3dpgpaWjgtyN9rsvEonQq5Ubevi7YuvJdCzaeQ5Xc4owZ2sKvt9/GW/38sMzoU04hNeE3ZOrsOLgFXyz95Km57ZTM0f8X7Q/Qr3r3/xong6WWP5SB2w+fhOztqTgbEYBhsUdxMsRPpgS1ZJ/Z0yYzu/ssGHDsHHjRkyZMsUQ8RA90uVbdzXrkQt2w1IiVi9SMSwk6sVSYqZpk5mr/7WUiGFR1m5Rvr35/f1EIkFz3MFfHURBsRISsQhv9/LDq918+QddD3oGuCGihTO+2XMJi3ddAAA0spLgxbBmGB3mbZT5QRo7WGLdhC744o8L+HL3Rfzv6HUcTb2DL59vhyBPDv2sj05cz8V/t53B31fUN707WUsxuU9LPN/RS+v+oqqIRCK42MrgYitD20q+YRYEAXeKFJqEq7zn61pOIXafuwUAaN/UAR4OlmWJkgxumqRJBlc7C03SRBWZmamHFUcHu2P90etY8scFpOcVY+pvJ/Htvst4p09L9PRnIRpTE384DUv3XEJWwf25qj7o548e/q71+loRiUQY0rYxIv1cMHvLaWxMvonvD1zBjpQMzHu6DSJaGG5eSEEQcCP3Hs6mF+DEjVxN+83ce7wHzMB0Tq5atGiBOXPm4NChQwgNDYW1tbXW82+99ZbegiN62M//pGnWc+7KH7FlzRUUK9GmiT0WDg+Bvzv/A9InmbkYr3Rtrkmu/ni3GxytjVsS3VxshilR/gjzdcY78cm4fKsQw74+hJj+ARgT3qxe/2FvSG7k3sNnO85hQ9mwU5m5GcZH+mBiN19NEYbqJFePIxKJ4GgthaO1FK2b3E/AH+wB+ml8Z34r/YQkYjM816kphrZrjDV/pyJu90VcyS7EW2uPwd/t0YVxqH4QhPtfas7akgIAaNLIEu9GtcTgkMYQm1CVPUdrKWKfa4chbRtj+oaTSLt9Dy98/zdGdGiC6f0DYW/1ZENxC4oVOJ9ZgDPpBTibkY+z6QU4l1GAgpKKw5B7L9qH1o3t0TfIDVFB7vBzteHfOT3T+X//77//Hg4ODjh69CiOHj2q9ZxIJGJyRQajUJVi+6kMzeONr4dDEIB7ChWKFSoUK0pxT67SPL4nV6FYqcI9eWklbWWPy/YrkiuRXZasTenjh0ndW8CcvVUGV5fuowjzdULC25H44Nfj2HUmC7O2pODgxWwsGB5SZypSUUUFxQrE7bmE5QeuQF52f8bT7Rrj3b7+dfamd6o+C4kY457ywXMdvbDy4BV8u++y5h4xABj57V9wsJLA3lJ7sbOspM1CAlsLc5bGrgOOp+Xi482nNY8draV4s2cLjOrctMbFYuqDHgGu2DmlGxZsP4vVSdfwy5Hr2H3uFmYPDkK3avTIKlWluJpThLMZ+TiXcT+Zun7nXqXbS8QitHC1RQtXa2w5ng5AXbDn5I08nLyRh892noePszWigtwQFeiOdl4OvD70QOfk6sqVSkorEdWCgxeztW5Kb+lmq7dvhx/81nl8ZHMmVg2Uo7UU373YAT8cuopPE85i15ksRC/Zh9iR7RDixWGCdYlCVYp1/6QidtcF5JT9v9DZxxEfDgjU6lEi02AtM8cbPf3wny7e+KqsCimg/pCoC5FIPT+SvdX9hOvBBKwufeFjijLzizF/+1n89u8NrfYdkyPh0kDmOLORmWP2
kGAMDvHE/60/gUu3CvHamn/RJ9BVa7ucuyU4m1GAM+nqROpsRgHOZxZoinw8zMPeAgHutvB3t0MrD1sEuNuhuYs1JGL1xMrlydW+97vj4MUc7EzJxIEL2biSXYhv917Gt3svw8VWhj6Bbugb5I6w5k6sblhDHLdA9cam5JvGDoEaAJFIhDERPujk44Q31v6Ly7cKMer7v/Bq1+bGDo2gHkr0x5kszP39DC7dUheraO5ijWnRrdCrVf2+P4Mez8FKiil9WmqSq69GtUOJohR59xSaJb/832KFVnuxohSCAOQXK5FfrEQaKv+2v9zzy/7CC128MbCNB4d5PqFihQrf7buMuD2XcE+hAqCufFn+d92YkwAbS4dmjtj2ViS++vMivtl7CYkpWZrnui7YrRlN8zBLiRj+7rYIKF887BDgblvtueGcbGR4rlNTPNepKe6WKLHnXBZ2ns7E7rNZuFVQgp//TsXPf6fCVmaOHgGu6Bvkjm7+LpwWQQc1+kldv34dmzdvRmpqKuRy7Td/0aJFegmM6EFFciV2nM54/IZEehLoaYetbz6Fjzefxi9HruObvZc1z13IugtvRyvYW0r4Yb4WnbqRh/9uO4OkyzkA1D2N7/T2w3OdmrLoTAPVM8C12olPiVJVlnwptZKwB5OynEK55r6949fzcPzXE5izJQVD2zXG852a1rxqbAMlCAK2nkjHvN/Paibxbt/UAR8NCkJLN5sG/6WphUSM9/r6o39rD3zw63GcupkPAMi+K4dIBHg7WpUlUvd7o5o6Wult6J6NzBwD23hiYBtPlChVSLqk7tFKTMnErYISbD5+E5uP34TU3AxPtXBG3yA39GrlZpQCVPWJzsnVH3/8gcGDB8PHxwfnzp1DcHAwrl69CkEQ0L59e0PESITElEwUyVXwamSJtCrGFtdZEglWdBgCAJggqd35Y+jJWEnNsWB4CJ7yc8G0307ibtnNwUO+OghAXULexUZdQtvVVgZXWwv1v3bqdZeydSdrmUndnF3bbpYVqyifI01qboZxT/ngte6+tT4nE9VfMnMxXG3FeFShtCK5UpNcTenjh1+P3kDq7SL8+Nc1/PjXNbT1csCoTk0xMIS9WY9z4nouZm9JwZGyOdo87S3wf9EBGBziqZligNQCPe3w8yud0WZWIgBg7Sud0aaJQ6326MnMxeju74ru/q74ZEgwjqXlYufpDOw4nYGrOUX482wW/jybBTPRSXTwdkRUkBu6tjRctcP6TOd3LSYmBu+++y5mz54NW1tbrF+/Hq6urnjhhRfQr18/Q8RIhM1l324NaOOh1YNQL4hEyLew0axT/TM4xBP+bjboG7sfAGBnaY78e0rIlaW4kXtP841sVcRmIjhZSzVJlzoRk8HFTr1uZ8EPaVWJ3XUBPzwwmeiwdo3xblRLNGlkZeTIyNSNj2yON3r44dClHKz9JxU7TmcgOS0XyWm5mL01BUPbeeL5Tk05bcNDMvOLsXDHOfx69DoA9TC2id18MaFrc1hKeU9bVR681zvEq/J5+GqLmZkIod6NEOrdCFOjA3Ah6y52nMrAjpQMnLqRj3+u3sY/V28D2+7v89flHHRp7sQvHVCD5OrMmTNYu3atemdzc9y7dw82NjaYPXs2hgwZgtdee03vQVLDdrtQjr3n1fPHDKyPyRWZBC/H+x/m/4rpBTORCLfKJpK9VVCsnlQ2vwRZWuslyCksgapU0Ew6C+Q/8nVGfvsXWnnYoqWbevF3t4WrrazBDD8UBAHX79yfaHrZPvX13snHER8OaIU2TRyMFBk1RGZmIjzl54yn/Jxxq6AE6/+9jrX/pOJaThF++isVP/2VipAm9ni+U1MMCvFskPcOlStWqLD8wBV8vfsiiuTq+6qGtWuMD/r5w8OelTvrK5FIpPl79GYvP9zIvafp0frnym2UllXUf3nVEYjNRAjytEMHb0d0bNYIoc0awbWBFCp5kM7/C1hbW6OkRD3Jm6enJy5duoSgoCAAQHZ2tn6jIwKw7WQ6lKUCghvboblLPZzfRKVC5JV/y9Z7g3VkTIOFRAwvRyutpKsySlUpcgrllSRexZqEKzO/GBl5xQDul8h9kL2lBC3dbDR/4NSLDZxMYNx7XpECyddzcbysR+B4Wq6m+h8ANHOywrT+rdAn0K3BJJhUN7nYytQ9MJHN8dflHPxc1pt1/Hoejl8/iTlbUzCkXWOM6tQUwY0bTm+WIAhIOJmBTxPOaHrx2zV1wEcDA9GuaSMjR0f61tjBEmMjfDA2wgc37hQhYv5uAIC7nQUy8otx4noeTlzPw4qD6qIz3k5W6ODtiA7NGqFjs0bwdTH9ebV0/pTXpUsXHDx4EIGBgRgwYADeffddnDx5Er/99hu6dOliiBipgdtUNv59SEhjI0dSQyoVQm+c0axTw2IuNoObnQXc7CwAVP6B68GpABaNCMG1nCKczyzAucwCXM0uRN49BQ5fvYPDV+9o7edsI9VKuPzdbeDnZltn70OSK0txJj1fM7TqeFouLmcXVtjOXCyCUqX+OnTTGxGwt+Q8Y1R3mJmJEN7CGeEtnJFzt7w3Kw1Xsgs1ldZaN7bHqM7q3ixTrrJ26kYeZm9JUQ8Rg7oc+NQH7qsi09bogTkg/3yvG+4UKXDk6m0cuXoHh6/exrnMAlzLKcK1nCKs/1c9TNTBSoIO3o3QoZkjOng3Qusm9iY3t5nOV/yiRYtw9656Ar+PP/4Yd+/eRXx8PFq0aIHFixfrPUBq2NJuF+HItTsQiYBBIZ7GDofI4PoFu2uNWS9WqHD5ViHOZxY8sNxF6u0iZN+VI/tuDg5dytE6hoe9BVq62cLH2VrTdjWnEI3trWptElVBEHAtp0iTSCWn5SLlZj7kqopztDRzskJbLweEeDmgrZcDfJyt0Xa2+sZuVgGkuszJRoYJXX3xSmRzJF3Owdp/0rD9VDpO3shDzG8n8cnWFAxuq+7N8nW1fvwB64msgmIs3H4Ov/57HYIAWEjM8GpXX7zarTnvuWnAGjtYonHbxhjSVv1leH6xAv9eu4Oj19TJVnJaLnKLFNh1Jgu7zqhLz0vNzdCmsT06NCsbSujdqNpl5esqna+A5s3vz/ViZWWFuLg4vQZE9KDNx9WFLMKaO8Hd3oLVhajBsZCIEehpV6EEdGGJEhez7moSrnOZd3EhswDpecWapfxeRQDov+QAAMBMBDSyksLBSgJHaykcrKRwtJLCwVoCRyspGllJ0chaikZWkrJ/pbC3lDy22uGdQjmSr+ciObWsV+q6+o/owxpZSTRJVFsvB4Q0cdD69hMAr3Oqd0QiEcJ9nRHu64ycu4H47d8bWPtPKi5nF2LtP6lY+08qAj3uX8P/XLmtmbTYUiKGhUQMC4kZLCRiyMzN6myvT/l9VXG7L6Kw7L6qoW098UG/AHg68L4q0mZnIdFUIATUk7+fvpmv6d06cu02su/KceTaHRy5dgff7FXv5+dqg7ZNHTTHEQTBCNHXHL9eoDpLEARsSlYPCRzatp4OCSQyEGuZOULKenselHdPgYtZBTiXcRenb+Zhzd+pAAA
rqRhFchVKBSCnUI6cQrlmEt7HEYnU9305PpCU2TxQ4bBv7D6k3a5YMVFqboYgTztNItXWywFNHa3q7AdHIn1wspHhla7NMT7SB39fuY21/6Ti95MZSEm/X8xmzMrDVe4vEgEyc7MHki51wmUpFcPCXJ2Ela/LJGJIxPevpx//ugZHKylsZOawsTBX//vAurW05j3XO05n4POd53G9bDqUEC8HzBwUiPa8r4qqSSI20/wtGB95f4TD4QeSrUu3CnEh6y4uZN3V7HcmowAdvB2NGLludE6uzMwe/Y2KiveUkJ6cSVcPf5KKzdA32N3Y4RDVC/aWEoR6OyLU2xFFcqUmuTryYW+IzUTILVLgTpEctwvlyC1SlP0rx+1CBXKL5OrnihRlbXIUFCshCEBukaLSnigAmsSquYu1ViIV4G4HqTmH9VHDJBKJ0KW5E7o0d8LMQXKs+ycVC3acAwA0d7ZGibIUJUoVihWluKdQQVVWdk0QgGJFKYoVpQAqv+aqMjfh7GO3sZaKYV2WcNnKzNXrVSRj0geG5b4TfxyAunDB/0X7Y0hI41oZYkymSyQSoZmzNZo5W+PZDl4AgJy7JTh67Q7+upyDFQevAgBautavYmY6J1cbNmzQeqxQKHDs2DH88MMPmDVrlt4CIyrvteoZ4Ap7y7p5gz5RfSIzF8PNTlxWXKN6FKrSssRKnWzdKUvOsvKLsXjXBQDA9y+GomMzJ9hb8TolqoyjtRRjIpppkqutbz1V4d4khaoUxQpVWWKlur+uVOGevOyx8sHn1M8XFCvw3X51ZbboYHcUK1S4W6JEQbEShXIl7hYrcbdECUVZkZhCuQqFclXZ1BDVJzM3w6vdfDGR91WRATnZyBAV5I6n/Jw1yZV5Pbv3VuerY8iQIRXahg8fjqCgIMTHx2PcuHF6CYwattJSQXO/1dB2LGRBZCwSsRlcbGVwsdUu+14kV2qSq/AWzvywRfSEJGIzSMRm0HVaoCK5UpNcfT4ipMprsUSpwt1iJQpLVCgoUajX5eok7G6JEoUl6kSsoHy9RIm8IgUOlhXM2fbWU2jhavtE50jUEOjtr2Hnzp3xyiuv6Otw1MD9c/U20vOKYWthrrkRst6SSPBjuwEAgAkSfrNPRES1T2YuhsxGDCcdRlg9OE0EC1YQVY9ekqt79+7hyy+/RJMmTfRxOCLNkMD+wR6wkNTz+Q9EIuRYO2jWiYiIiMg06ZxcNWrUSKughSAIKCgogJWVFX766Se9BkcNU4lShW0n0gEAQ9pySCARERER1Q86J1eLFy/WSq7MzMzg4uKCzp07o1EjluOkJ7fn3C3kFyvhZidD5+ZOxg7nyalU6JJ6omy9NzgDAhEREZFp0vlT3pgxYwwQBtF9m5PVhSwGh3g+duLSekGlQpfUk5p1IiIiIjJNOidXJ06cqPa2bdq00fXw1MAVFCuw60wmAGAIJw4mIiIionpE5+Sqbdu2j5xEGFDfhyUSiTihMOls+6kMlChL4etijSBPO2OHQ0RERERUbTrPyvXbb7/Bx8cHcXFxOHbsGI4dO4a4uDj4+vpi/fr1uHz5Mq5cuYLLly8bIl4ycZvKhgQObdv4sUk8EREREVFdonPP1aeffoovvvgC/fv317S1adMGXl5emDFjBo4eParXAKnhyMovxqFL2QA4JJCIiIiI6h+de65OnjwJHx+fCu0+Pj5ISUnRS1DUMG05kY5SAWjf1AFNnayMHQ4RERERkU50Tq5atWqFTz75BMXFxZq2kpISfPLJJ2jVqpXOAcTFxcHHxwcWFhYIDQ3F/v37H7l9SUkJpk+fDm9vb8hkMvj6+mLFihVa26xfvx6BgYGQyWQIDAzEhg0bdI6Lal/5xMFD27HXioiIiIjqH52HBX7zzTcYNGgQvLy8EBISAgA4fvw4RCIRtm7dqtOx4uPjMXnyZMTFxSEiIgLffvstoqOjkZKSgqZNm1a6z4gRI5CZmYnly5ejRYsWyMrKglKp1DyflJSEkSNHYs6cORg2bBg2bNiAESNG4MCBA+jcubOup0u15PKtuzhxPQ9iMxH6t/Ywdjj6ZW6OtSF9AQATzDnHFREREZGp0vmTXqdOnXDlyhX89NNPOHv2LARBwMiRIzFq1ChYW1vrdKxFixZh3LhxGD9+PAAgNjYWO3bswNKlSzF37twK22/fvh179+7F5cuX4ejoCABo1qyZ1jaxsbHo06cPYmJiAAAxMTHYu3cvYmNjsXbtWl1Pl2rJxrJCFpF+znC2kRk5Gj0zM0OmrbNmnYiIiIhMU42+RreyssKECROe6IXlcjmOHj2KqVOnarVHRUXh0KFDle6zefNmdOjQAQsWLMCPP/4Ia2trDB48GHPmzIGlpSUAdc/VO++8o7Vf3759ERsbW2UsJSUlKCkp0TzOz8+v4VnVH0VyJQI/2gEASJndF1ZS4/WoCIKAzeVDAlnIgoiIiIjqKZ2/Rv/hhx+wbds2zeMPPvgADg4OCA8Px7Vr16p9nOzsbKhUKri5uWm1u7m5ISMjo9J9Ll++jAMHDuDUqVPYsGEDYmNj8euvv+L111/XbJORkaHTMQFg7ty5sLe31yxeXl7VPg96csev5+FqThEsJWL0CXR7/A71jUqF0OspCL2eAnDuNyIiIiKTpXNy9emnn2r1En311VdYsGABnJ2dK/QYVcfDcxmVT0BcmdLSUohEIqxZswadOnVC//79sWjRIqxatQr37t2r0TEB9dDBvLw8zZKWlqbzeVDNbTym7rXqE+gGa5kJ3pOkUiHy6jFEXj3G5IqIiIjIhOn8STYtLQ0tWrQAAGzcuBHDhw/HhAkTEBERge7du1f7OM7OzhCLxRV6lLKysir0PJXz8PBA48aNYW9vr2lr1aoVBEHA9evX4efnB3d3d52OCQAymQwymYnd51NPKFWl2HqibOLgdp5GjoaIiIiIqOZ07rmysbFBTk4OAGDnzp3o3bs3AMDCwkKr9+hxpFIpQkNDkZiYqNWemJiI8PDwSveJiIjAzZs3cffuXU3b+fPnYWZmhiZNmgAAwsLCKhxz586dVR6TjOvQpRxk35XD0VqKSD8XY4dT71hJzXF13gBcnTfAqPfNEREREVENkqs+ffpg/PjxGD9+PM6fP48BAwYAAE6fPl2hct/jTJkyBd9//z1WrFiBM2fO4J133kFqaiomTpwIQD1c78UXX9RsP2rUKDg5OWHs2LFISUnBvn378P777+Pll1/WDFV8++23sXPnTsyfPx9nz57F/PnzsWvXLkyePFnXU6VasLGskMWA1h6QiFlJj4iIiIjqL50/zX799dcICwvDrVu3sH79ejg5OQEAjh49iueff16nY40cORKxsbGYPXs22rZti3379iEhIQHe3t4AgPT0dKSmpmq2t7GxQWJiInJzc9GhQwe88MILGDRoEL744gvNNuHh4Vi3bh1WrlyJNm3aYNWqVYiPj+ccV3XQPbkKO06ph3BySCARERER1Xc6jyNycHDAV199VaF91qxZNQpg0qRJmDRpUqXPrVq1qkJbQEBAhWF/Dxs+fDiGDx9eo3io9u
w6k4lCuQpNGlmifdNGxg6HiIiIiOiJPNFNGq1bt0ZCQgJLl1ONbCobEjikrecjqzmScZTfz0VERERE1fNEydXVq1ehUCj0FQs1IHcK5dhz7haABjBxsLk5fm2tLvwywZxFJ4iIiIhMFT/pkVEknEqHslRAoIcd/NxsjR2OYZmZ4bq9m2adiIiIiEyTTp/0lEolZs2apZlkNzIyUlOlj0gXm46p57Ya0paFLIiIiIjINOiUXJmbm2PhwoVQqVQAgISEBHh4eBgkMDJd1+8U4Z+rtyESAYMbQnKlUiHk5jmE3DwHlF07RERERGR6dB6j1Lt3b+zZs8cAoVBDseV4OgCgs48jPOwbQM+nSoUel4+gx+UjTK6IiIiITJjO91xFR0cjJiYGp06dQmhoKKytrbWeHzx4sN6CI9NUXiXQ5AtZEBEREVGDonNy9dprrwEAFi1aVOE5kUikGTJIVJmzGfk4m1EAqdgM0cEcUkpEREREpkPn5Kq0tNQQcVADsbGskEV3fxfYW0mMHA0RERERkf6wFDvVmtJSAZvLhwS245DAhoqTExMREZGpqlZBi3Xr1lX7gGlpaTh48GCNAyLTdeTaHdzMK4atzBw9A1yNHQ4RERERkV5VK7launQpAgICMH/+fJw5c6bC83l5eUhISMCoUaMQGhqK27dv6z1Qqv82lvVa9Qt2h4VEbORoiIiIiIj0q1rDAvfu3YutW7fiyy+/xLRp02BtbQ03NzdYWFjgzp07yMjIgIuLC8aOHYtTp07B1ZW9EqRNrixFwkl1CfYhDa1KoLk5NgV2BwBMMOdIXCIiIiJTVe1PegMHDsTAgQORk5ODAwcO4OrVq7h37x6cnZ3Rrl07tGvXDmZmOk+bRQ3E3vO3kFukgKutDGG+TsYOp3aZmeGKY2PNOhERERGZJp2/RndycsKQIUMMEQuZsPK5rQaFeEJsJjJyNERERERE+scxSmRwd0uU2HUmE0ADnThYpUJg5mXNOi87IiIiItPET3lkcDtOZaBYUYrmztYIbmxn7HBqnZVYhATvHPUDMXvtDIll3omIiMiYeAMIGVx5lcAhbRtDJGJyQURERESmickVGdStghIcvJgNABjS1tPI0RARERERGU6Nkyu5XI5z585BqVTqMx4yMdtPZ6BUANp6OaCZs7WxwyEiIiIiMhidk6uioiKMGzcOVlZWCAoKQmpqKgDgrbfewrx58/QeINVvW4+r57Yayl4rIiIiIjJxOidXMTExOH78OPbs2QMLCwtNe+/evREfH6/X4Kj+O3kjD2IzEQa0YXJFRERERKZN52qBGzduRHx8PLp06aJVnCAwMBCXLl3Sa3BkGiJaOMPFVmbsMIiIiIiIDErn5OrWrVtwdXWt0F5YWMhKcFSpBj8k0NwcePbZ++tEREREZJJ0HhbYsWNHbNu2TfO4PKH67rvvEBYWpr/IyCRYSMwQFeRu7DCMy8wMCApSL2Ys0ElERERkqnT+Gn3u3Lno168fUlJSoFQqsWTJEpw+fRpJSUnYu3evIWKkeqyHvytsZOytISIiIiLTp/PX6OHh4Th06BCKiorg6+uLnTt3ws3NDUlJSQgNDTVEjFSPqEoF7ErJ1Dwe2MbDiNHUEaWlwOnT6qW01NjREBEREZGB6NSloFAoMGHCBMyYMQM//PCDoWKiBxTJlQj8aAcAIGV2X1hJ62YvULFChV+PXsfyA1dwJbtQ0x7RwtmIUdURSiXwv/+p16dNA6RS48ZDRERERAahU8+VRCLBhg0bDBUL1UO3C+VYsusCIub9iQ83nsKV7ELYWd5PAKXmvMeIiIiIiBoGnT/5Dhs2DBs3bjRAKFSfXMspxIyNpxA+7w8s3nUeOYVyNGlkiZmDAvHHlG7GDo+IiIiIqNbpPMasRYsWmDNnDg4dOoTQ0FBYW1trPf/WW2/pLTiqe46l3sGyfZex/XQGBEHdFtzYDhO6+qJ/sDvMxWYokiuNGyQRERERkRHonFx9//33cHBwwNGjR3H06FGt50QiEZMrE1RaKuCPs1n4bt9l/HP1tqa9u78LJnRtjrDmTpzjjIiIiIgaPJ2TqytXrhgiDqqDihUqbDx2A9/tv4xLt9RFKiRiEYa0bYxXIpvD393WyBESEREREdUdT1RtQBAECOVjw2ooLi4OPj4+sLCwQGhoKPbv31/ltnv27IFIJKqwnD17VrPNqlWrKt2muLj4ieJsSHKL5Pjqzwt4av5uTP3tJC7dKoStzByvdmuO/R/0xGfPhjCxIiIiIiJ6SI3qeq9evRoLFy7EhQsXAAAtW7bE+++/j9GjR+t0nPj4eEyePBlxcXGIiIjAt99+i+joaKSkpKBp06ZV7nfu3DnY2dlpHru4uGg9b2dnh3Pnzmm1WVhY6BRbQ5R2uwjLD1xB/OE03FOoAACe9hZ4+SkfjOzoBVsLiZEjrKfEYmDo0PvrRERERGSSdE6uFi1ahBkzZuCNN95AREQEBEHAwYMHMXHiRGRnZ+Odd97R6Vjjxo3D+PHjAQCxsbHYsWMHli5dirlz51a5n6urKxwcHKp8XiQSwd3dvdpxNHSnb+bhh0PXkHAyHaVlHZGtPOwwoasPBrbxhETMcupPRCwG2rY1dhT0hKyk5rg6b4CxwyAiIqI6TOfk6ssvv8TSpUvx4osvatqGDBmCoKAgfPzxx9VOruRyOY4ePYqpU6dqtUdFReHQoUOP3Lddu3YoLi5GYGAgPvzwQ/To0UPr+bt378Lb2xsqlQpt27bFnDlz0K5duyqPV1JSgpKSEs3j/Pz8ap2DqXj2m78065F+zpjQtTmeauHMIhVERERERDrQOblKT09HeHh4hfbw8HCkp6dX+zjZ2dlQqVRwc3PTandzc0NGRkal+3h4eGDZsmUIDQ1FSUkJfvzxR/Tq1Qt79uxB165dAQABAQFYtWoVWrdujfz8fCxZsgQRERE4fvw4/Pz8Kj3u3LlzMWvWrGrHbgoevFfO3EyEQSGeeCWyOQI97R6xF9VIaSlw8aJ6vUULwIw9gURERESmqEbzXP3yyy+YNm2aVnt8fHyVycujPNw7IghClT0m/v7+8Pf31zwOCwtDWloaPvvsM01y1aVLF3Tp0kWzTUREBNq3b48vv/wSX3zxRaXHjYmJwZQpUzSP8/Pz4eXlpfO51Cenb97vnft9ciT8XFmgwmCUSuDnn9Xr06YBUqlx4yEiIiIig9A5uZo1axZGjhyJffv2ISIiAiKRCAcOHMAff/yBX375pdrHcXZ2hlgsrtBLlZWVVaE361G6dOmCn376qcrnzczM0LFjR03xjcrIZDLIZLJqv6Yp2H7q/s+9sYOlESMhIiIiIjINOo9PeuaZZ/D333/D2dkZGzduxG+//QZnZ2f8888/GDZsWLWPI5VKERoaisTERK32xMTESocdVuXYsWPw8PCo8nlBEJCcnPzIbRoaQRCw/XTlQy+JiIiIiKhmalSKPTQ09JG9RdU1ZcoUjB49Gh06dEBYWBiWLVuG1NRUTJw4EYB6uN6NGzewevVqAOpqgs2aNUNQUBDkcjl++uknrF+/HuvXr9ccc
9asWejSpQv8/PyQn5+PL774AsnJyfj666+fOF5TcSwtFzdzOe8XEREREZE+6ZxcJSQkQCwWo2/fvlrtO3bsQGlpKaKjo6t9rJEjRyInJwezZ89Geno6goODkZCQAG9vbwDq4hmpqama7eVyOd577z3cuHEDlpaWCAoKwrZt29C/f3/NNrm5uZgwYQIyMjJgb2+Pdu3aYd++fejUqZOup2qyth6vfuERIjI8lnknIiIyDTonV1OnTsW8efMqtAuCgKlTp+qUXAHApEmTMGnSpEqfW7VqldbjDz74AB988MEjj7d48WIsXrxYpxgaktJSAQknmVwREREREembzvdcXbhwAYGBgRXaAwICcLG83DTVWUeu3UFGfjFsZDUaEUpERERERFXQObmyt7fH5cuXK7RfvHgR1tbWegmKDGfriZsAgF6tXI0cSQMiFgP9+6sXsdjY0RARERGRgejcfTF48GBMnjwZGzZsgK+vLwB1YvXuu+9i8ODBeg+Q9EdVKiDhpLpKYL9gd2xKvmnkiOoOg97zIhYDvOePqNp4DxoREdVXOvdcLVy4ENbW1ggICICPjw98fHzQqlUrODk54bPPPjNEjKQnf1/JQfbdEthbShDW3MnY4RARERERmRSde67s7e1x6NAhJCYm4vjx47C0tESbNm3QtWtXQ8RHerT1hLqQRb8gd0jNdc6rqaZKS4HyqpdNmwJm/NkTERERmaIaVTUQiUSIiopCVFQUAHX5c6rblKpSbD+lHhI4MIQTKtcqpRIor3w5bRoglRo1HCIiIiIyDJ2/Qp8/fz7i4+M1j0eMGAEnJyc0btwYx48f12twpD+HLuXgdqEcjtZSDgkkaiDK7126Om8ArKSsEMqfBxERGZrOydW3334LLy8vAEBiYiISExPx+++/Izo6Gu+//77eAyT9KK8S2C/YHeZiDksjIiIiItI3nb+6S09P1yRXW7duxYgRIxAVFYVmzZqhc+fOeg+QnpxcWYodpzMBAAPbcEggEREREZEh6NyF0ahRI6SlpQEAtm/fjt69ewMABEGASqXSb3SkFwcvZiPvngIutjJ09uGQQCIifeOQQyIiAmrQc/X0009j1KhR8PPzQ05ODqKjowEAycnJaNGihd4DpCe3pWxIYP9gd4jNREaOhohMAeeiIiIiqkjn5Grx4sVo1qwZ0tLSsGDBAtjY2ABQDxecNGmS3gOkJ1OsUCGxfEhgiKeRo3ly/EBHRERERHWVzsmVRCLBe++9V6F98uTJ+oiH9Gzf+VsoKFHC3c4CoU0bGTuchkksBvr0ub9ORERERCaJA8NNXPnEwQPaeMCMQwKNQywGIiKMHQURERERGRhrcpuwe3IVdp1RDwkcwCqBREREREQGxZ4rE7bnXBaK5Co0drBEOy8HY4fTcJWWAunqHkR4eABm/E6DiEwb748looaq2p/ylEqlIeMgAygfEjiwjQdEIg4JNBqlEvjuO/XC64iIiIjIZFU7ufLw8MB7772HM2fOGDIe0pPCEiX+OFs+cXD9rxJIRERERFTXVTu5mjJlCrZs2YLg4GCEhYVh+fLluHv3riFjoyfwx9ksFCtK4e1kheDGdsYOh4iIiIjI5FU7uYqJicG5c+ewZ88eBAQEYPLkyfDw8MDYsWNx8OBBQ8ZINbD1uHriYA4JJCIiIiKqHTrfWR8ZGYmVK1ciIyMDsbGxuHjxIiIjI+Hv748FCxYYIkbSUUGxAnvO3wLAIYFERERERLWlxmXLrK2tMW7cOOzfvx9btmxBdnY2YmJi9Bkb1dCuM5mQK0vR3MUaAe62xg6HiIiIiKhBqHEp9qKiIsTHx2PlypU4ePAgfH198f777+szNqqhrcfLqwR6ckggERERkR5wigGqDp2Tq/3792PlypX49ddfoVKpMHz4cHzyySfo2rWrIeIjHeUVKbDvgnpI4CBOHFw3iMVA9+7314mIiIjIJFU7ufr000+xatUqXLp0CR06dMDChQvx/PPPw86Olejqkh0pGVCoBPi72cLPjUMC64QHkysiIiIiMlnVTq4WL16M//znPxg3bhyCg4MNGRM9gQcnDiYiIiJqSDh0j4yt2snVzZs3IZFIDBkLPaHbhXIcvJgNABgYwiqBdYYgALfUQzXh4gLwPjgiIiIik1TtaoH79+9HYGAg8vPzKzyXl5eHoKAg7N+/X6/BkW62n8qAqlRAkKcdfJytjR0OlVMogLg49aJQGDsaIiIiIjKQavdcxcbG4pVXXqn0Hit7e3u8+uqrWLRoESIjI/UaIFXftpPqiYMHcEggERFVA4dQERHpV7V7ro4fP45+/fpV+XxUVBSOHj2ql6BId7cKSpB0KQcAMLA1hwQSEZmK8gTo6rwBsJLWeAYVIiKqBdVOrjIzMx95z5W5uTluld9XQrVu+6l0lApASBN7NHWyMnY4REREREQNTrWTq8aNG+PkyZNVPn/ixAl4eHA4mrFsOXF/4mAiIiIiIqp91U6u+vfvj48++gjFxcUVnrt37x5mzpyJgQMH6jU4qp7M/GIcvnobAO+3IiIiIiIylmonVx9++CFu376Nli1bYsGCBdi0aRM2b96M+fPnw9/fH7dv38b06dN1DiAuLg4+Pj6wsLBAaGjoIysO7tmzByKRqMJy9uxZre3Wr1+PwMBAyGQyBAYGYsOGDTrHVZ9sO5EOQQBCvRvB08HS2OEQERERETVI1b4z1s3NDYcOHcJrr72GmJgYCIIAABCJROjbty/i4uLg5uam04vHx8dj8uTJiIuLQ0REBL799ltER0cjJSUFTZs2rXK/c+fOaVUtdHFx0awnJSVh5MiRmDNnDoYNG4YNGzZgxIgROHDgADp37qxTfPXF1hPqKoGcOLiOEouB8PD760RERERkknQqO+Tt7Y2EhATcuXMHFy9ehCAI8PPzQ6NGjWr04osWLcK4ceMwfvx4AOpy7zt27MDSpUsxd+7cKvdzdXWFg4NDpc/FxsaiT58+iImJAQDExMRg7969iI2Nxdq1a2sUZ112I/ce/k3NhUgE9G/N5KpOEouBqChjR0FEREREBlbtYYEPatSoETp27IhOnTrVOLGSy+U4evQooh760BkVFYVDhw49ct927drBw8MDvXr1wu7du7WeS0pKqnDMvn37PvKYJSUlyM/P11rqi4SyQhYdmznCzc7CyNEQERERETVcNUqu9CE7OxsqlarCUEI3NzdkZGRUuo+HhweWLVuG9evX47fffoO/vz969eqFffv2abbJyMjQ6ZgAMHfuXNjb22sWLy+vJziz2lU+JHAQhwTWXYIA5Oaql7LhtERERERkeow+G6FIJNJ6LAhChbZy/v7+8Pf31zwOCwtDWloaPvvsM3Tt2rVGxwTUQwenTJmieZyfn18vEqzUnCIcv54HMxHQL5jJVZ2lUACxser1adMAqdSo4RARERGRYRit58rZ2RlisbhCj1JWVpZOhTG6dOmCCxcuaB67u7vrfEyZTAY7OzutpT7YelLdaxXm6wQXW5mRoyEiIiIiatiMllxJpVKEhoYiMTFRqz0xMRHh5ZXVquHYsWNakxeH
hYVVOObOnTt1OmZ9sfV4zScOtpKa4+q8Abg6bwCspEbvwCQiIiIiqveM+ql6ypQpGD16NDp06ICwsDAsW7YMqampmDhxIgD1cL0bN25g9erVANSVAJs1a4agoCDI5XL89NNPWL9+PdavX6855ttvv42uXbti/vz5GDJkCDZt2oRdu3bhwIEDRjlHQ7l86y5S0vMhNhOhb5C7scMhIiKqFeVfDhIR1UVGTa5GjhyJnJwczJ49G+np6QgODkZCQgK8vb0BAOnp6UhNTdVsL5fL8d577+HGjRuwtLREUFAQtm3bhv79+2u2CQ8Px7p16/Dhhx9ixowZ8PX1RXx8vMnNcbWtrEpgRAtnOFrzHh4iIiIiImMz+niwSZMmYdKkSZU+t2rVKq3HH3zwAT744IPHHnP48OEYPny4PsKrs7aeKB8SyEIWRERERER1gdHuuaKau5BZgHOZBZCIRegbyCGBRERERER1gdF7rkh3W8p6rbr6ucDeSmLkaOixzMyAjh3vrxMRERGRSWJyVc8IgqCZOHhgCIcE1gvm5sAA3nxNREREZOqYXNUzZ9ILcPlWIaTmZujdqvrzgRERERHVFaz6SKaKyVU9s61s4uDuLV1ga8EhgfWCIABFRep1KytAJDJuPERERERkELwBpB5RDwksqxIYovvEwWQkCgWwcKF6USiMHQ0RERERGQiTq3okJT0f13KKYCExQ68AV2OHQ0RERERED+CwwHrk95MZAIBeAW6wlvGtIyIiIqJH4/1ttYuf0OuR7afVyRUnDiYiItI/fggloifFYYH1yM3cYlhLxejBIYFERERERHUOk6t6pnegGywkYmOHQURERERED+GwwHpmQGsOCSQiIiIyJRySajqYXNUjNjJzdPN3MXYYpCszM6Bt2/vrRERERGSSmFzVI71auUJmziGB9Y65OTB0qLGjICIiIiID49fodZyqVNCs9wt2N2IkRERERET0KOy5quOOXL2tWQ9r7mTESKjGBAFQKNTrEgkgEhk3HiIiIiIyCPZc1XG/n8rQrEvN+XbVSwoF8Omn6qU8ySIiIiIik8NP63Vcf1YHJCIiIiKqF5hc1XGdfByNHQIREREREVUD77kiIiIiIiKdcG6uyrHnioiIiIiISA/Yc0VEREREFbBngkh37LkiIiIiIiLSA/ZcERmamRkQGHh/nYiIGhxD9QKxd4mobmFyRWRo5ubAiBHGjoKIiIiIDIxfoxMREREREekBkysiIiIiIiI94LBAIkOTy4FPP1WvT5sGSKXGjYeIiIiIDII9V0RERERERHrA5IqIiIiIiEgPOCyQiIiIiIjqjPo8xQB7roiIiIiIiPSAyRUREREREZEeGD25iouLg4+PDywsLBAaGor9+/dXa7+DBw/C3Nwcbdu21WpftWoVRCJRhaW4uNgA0RMREREREakZNbmKj4/H5MmTMX36dBw7dgyRkZGIjo5GamrqI/fLy8vDiy++iF69elX6vJ2dHdLT07UWCwsLQ5wC0eOZmQF+furFzOjfZxARERGRgRj1k96iRYswbtw4jB8/Hq1atUJsbCy8vLywdOnSR+736quvYtSoUQgLC6v0eZFIBHd3d62FyGjMzYEXXlAv5qwhQ0RERGSqjJZcyeVyHD16FFFRUVrtUVFROHToUJX7rVy5EpcuXcLMmTOr3Obu3bvw9vZGkyZNMHDgQBw7duyRsZSUlCA/P19rISIiIiIi0oXRkqvs7GyoVCq4ublptbu5uSEjI6PSfS5cuICpU6dizZo1MK+iByAgIACrVq3C5s2bsXbtWlhYWCAiIgIXLlyoMpa5c+fC3t5es3h5edX8xIiIiIiIqEEy+g0gIpFI67EgCBXaAEClUmHUqFGYNWsWWrZsWeXxunTpgv/85z8ICQlBZGQkfvnlF7Rs2RJffvlllfvExMQgLy9Ps6SlpdX8hIgeJpcD//2vepHLjR0NERERERmI0W4AcXZ2hlgsrtBLlZWVVaE3CwAKCgpw5MgRHDt2DG+88QYAoLS0FIIgwNzcHDt37kTPnj0r7GdmZoaOHTs+sudKJpNBJpM94RkRPYJCYewIiIiIiMjAjNZzJZVKERoaisTERK32xMREhIeHV9jezs4OJ0+eRHJysmaZOHEi/P39kZycjM6dO1f6OoIgIDk5GR4eHgY5DyIiIiIiIsCIPVcAMGXKFIwePRodOnRAWFgYli1bhtTUVEycOBGAerjejRs3sHr1apiZmSE4OFhrf1dXV1hYWGi1z5o1C126dIGfnx/y8/PxxRdfIDk5GV9//XWtnhsRERERETUsRk2uRo4ciZycHMyePRvp6ekIDg5GQkICvL29AQDp6emPnfPqYbm5uZgwYQIyMjJgb2+Pdu3aYd++fejUqZMhToGIiIiIiAgAIBIEQTB2EHVNfn4+7O3tkZeXBzs7O6PGUiRXIvCjHQCAlNl9YSWt+/Mk1ceYDUouBz79VL0+bRoglRo3HiIiIiKqNl1yA6NXCyQiIiIiIjIFDbxLgagWiERAs2b314mIiIjIJDG5IjI0iQQYM8bYURARERGRgXFYIBERERERkR4wuSIiIiIiItIDDgskMjS5HIiNVa9PnsxqgUREREQmiskVUW0oKjJ2BERERERkYBwWSEREREREpAdMroiIiIiIiPSAyRUREREREZEeMLkiIiIiIiLSAxa0IL2zkprj6rwBxg6DiIiIiKhWMbkiMjSRCPD0vL9ORERERCaJyRWRoUkkwIQJxo6CiIiIiAyM91wRERERERHpAZMrIiIiIiIiPeCwQCJDUyiAr79Wr7/+unqYIBERERGZHCZXRIYmCEBu7v11IiIiIjJJHBZIRERERESkB0yuiIiIiIiI9IDJFRERERERkR4wuSIiIiIiItIDJldERERERER6wGqBRIYmEgEuLvfXiYiIiMgkMbkiMjSJRD2/FRERERGZNA4LJCIiIiIi0gMmV0RERERERHrAYYFEhqZQAMuWqdcnTFAPEyQiIiIik8PkisjQBAG4dev+OhERERGZJA4LJCIiIiIi0gMmV0RERERERHrA5IqIiIiIiEgPmFwRERERERHpgdGTq7i4OPj4+MDCwgKhoaHYv39/tfY7ePAgzM3N0bZt2wrPrV+/HoGBgZDJZAgMDMSGDRv0HDUREREREZE2oyZX8fHxmDx5MqZPn45jx44hMjIS0dHRSE1NfeR+eXl5ePHFF9GrV68KzyUlJWHkyJEYPXo0jh8/jtGjR2PEiBH4+++/DXUaRI8mEgEODupFJDJ2NERERERkICJBMF5t6M6dO6N9+/ZYunSppq1Vq1YYOnQo5s6dW+V+zz33HPz8/CAWi7Fx40YkJydrnhs5ciTy8/Px+++/a9r69euHRo0aYe3atdWKKz8/H/b29sjLy4OdnZ3uJ6ZHRXIlAj/aAQBImd0XVlJWzyciIiIiqi265AZG67mSy+U4evQooqKitNqjoqJw6NChKvdbuXIlLl26hJkzZ1b6fFJSUoVj9u3b95HHLCkpQX5+vtZCRERERESkC6MlV9nZ2VCpVHBzc9Nqd3N
zQ0ZGRqX7XLhwAVOnTsWaNWtgbl55D05GRoZOxwSAuXPnwt7eXrN4eXnpeDZERERERNTQGb2gheihe1AEQajQBgAqlQqjRo3CrFmz0LJlS70cs1xMTAzy8vI0S1pamg5nQPQYCgWwbJl6USiMHQ0RERERGYjRbuBxdnaGWCyu0KOUlZVVoecJAAoKCnDkyBEcO3YMb7zxBgCgtLQUgiDA3NwcO3fuRM+ePeHu7l7tY5aTyWSQyWR6OCuiSggCcPPm/XUiIiIiMklG67mSSqUIDQ1FYmKiVntiYiLCw8MrbG9nZ4eTJ08iOTlZs0ycOBH+/v5ITk5G586dAQBhYWEVjrlz585Kj0lERERERKQvRi09N2XKFIwePRodOnRAWFgYli1bhtTUVEycOBGAerjejRs3sHr1apiZmSE4OFhrf1dXV1hYWGi1v/322+jatSvmz5+PIUOGYNOmTdi1axcOHDhQq+dGREREREQNi1GTq5EjRyInJwezZ89Geno6goODkZCQAG9vbwBAenr6Y+e8elh4eDjWrVuHDz/8EDNmzICvry/i4+M1PVtERERERESGYNR5ruoqznNFeiWXA59+ql6fNg2QSo0bDxERERFVW72Y54qIiIiIiMiUsBuEqDZYWRk7AiIiIiIyMCZXRIYmlQIffGDsKIiIiIjIwDgskIiIiIiISA+YXBEREREREekBhwUSGZpCAaxZo15/4QVAIjFuPERERERkEEyu6jgrqTmuzhtg7DDoSQgCcPXq/XUiIiIiMkkcFkhERERERKQHTK6IiIiIiIj0gMkVERERERGRHjC5IiIiIiIi0gMmV0RERERERHrAaoFEtYHl14mIiIhMHpMrIkOTSoHp040dBREREREZGIcFEhERERER6QGTKyIiIiIiIj3gsEAiQ1Mqgfh49frIkYA5LzsiIiIiU8RPeUSGVloKXLhwf52IiIiITBKHBRIREREREekBkysiIiIiIiI9YHJFRERERESkB0yuiIiIiIiI9IDJFRERERERkR6wWmAlBEEAAOTn5xs5EjIJcjlQUqJez88HpFLjxkNERERE1VaeE5TnCI8iEqqzVQNz/fp1eHl5GTsMIiIiIiKqI9LS0tCkSZNHbsPkqhKlpaW4efMmbG1tIRKJqrVPfn4+vLy8kJaWBjs7OwNHSIbA99A08H00DXwf6z++h6aB76Np4Pv4ZARBQEFBATw9PWFm9ui7qjgssBJmZmaPzUqrYmdnx1/aeo7voWng+2ga+D7Wf3wPTQPfR9PA97Hm7O3tq7UdC1oQERERERHpAZMrIiIiIiIiPWBypScymQwzZ86ETCYzdihUQ3wPTQPfR9PA97H+43toGvg+mga+j7WHBS2IiIiIiIj0gD1XREREREREesDkioiIiIiISA+YXBEREREREekBkysiIiIiIiI9YHKlB3FxcfDx8YGFhQVCQ0Oxf/9+Y4dEOvj4448hEom0Fnd3d2OHRY+xb98+DBo0CJ6enhCJRNi4caPW84Ig4OOPP4anpycsLS3RvXt3nD592jjBUqUe9x6OGTOmwrXZpUsX4wRLlZo7dy46duwIW1tbuLq6YujQoTh37pzWNrwW677qvI+8Huu+pUuXok2bNpqJgsPCwvD7779rnue1WDuYXD2h+Ph4TJ48GdOnT8exY8cQGRmJ6OhopKamGjs00kFQUBDS09M1y8mTJ40dEj1GYWEhQkJC8NVXX1X6/IIFC7Bo0SJ89dVXOHz4MNzd3dGnTx8UFBTUcqRUlce9hwDQr18/rWszISGhFiOkx9m7dy9ef/11/PXXX0hMTIRSqURUVBQKCws12/BarPuq8z4CvB7ruiZNmmDevHk4cuQIjhw5gp49e2LIkCGaBIrXYi0R6Il06tRJmDhxolZbQECAMHXqVCNFRLqaOXOmEBISYuww6AkAEDZs2KB5XFpaKri7uwvz5s3TtBUXFwv29vbCN998Y4QI6XEefg8FQRBeeuklYciQIUaJh2omKytLACDs3btXEARei/XVw++jIPB6rK8aNWokfP/997wWaxF7rp6AXC7H0aNHERUVpdUeFRWFQ4cOGSkqqokLFy7A09MTPj4+eO6553D58mVjh0RP4MqVK8jIyNC6NmUyGbp168Zrs57Zs2cPXF1d0bJlS7zyyivIysoydkj0CHl5eQAAR0dHALwW66uH38dyvB7rD5VKhXXr1qGwsBBhYWG8FmsRk6snkJ2dDZVKBTc3N612Nzc3ZGRkGCkq0lXnzp2xevVq7NixA9999x0yMjIQHh6OnJwcY4dGNVR+/fHarN+io6OxZs0a/Pnnn/j8889x+PBh9OzZEyUlJcYOjSohCAKmTJmCp556CsHBwQB4LdZHlb2PAK/H+uLkyZOwsbGBTCbDxIkTsWHDBgQGBvJarEXmxg7AFIhEIq3HgiBUaKO6Kzo6WrPeunVrhIWFwdfXFz/88AOmTJlixMjoSfHarN9GjhypWQ8ODkaHDh3g7e2Nbdu24emnnzZiZFSZN954AydOnMCBAwcqPMdrsf6o6n3k9Vg/+Pv7Izk5Gbm5uVi/fj1eeukl7N27V/M8r0XDY8/VE3B2doZYLK6Q8WdlZVX4ZoDqD2tra7Ru3RoXLlwwdihUQ+XVHnltmhYPDw94e3vz2qyD3nzzTWzevBm7d+9GkyZNNO28FuuXqt7HyvB6rJukUilatGiBDh06YO7cuQgJCcGSJUt4LdYiJldPQCqVIjQ0FImJiVrtiYmJCA8PN1JU9KRKSkpw5swZeHh4GDsUqiEfHx+4u7trXZtyuRx79+7ltVmP5eTkIC0tjddmHSIIAt544w389ttv+PPPP+Hj46P1PK/F+uFx72NleD3WD4IgoKSkhNdiLeKwwCc0ZcoUjB49Gh06dEBYWBiWLVuG1NRUTJw40dihUTW99957GDRoEJo2bYqsrCx88sknyM/Px0svvWTs0OgR7t69i4sXL2oeX7lyBcnJyXB0dETTpk0xefJkfPrpp/Dz84Ofnx8+/fRTWFlZYdSoUUaMmh70qPfQ0dERH3/8MZ555hl4eHjg6tWrmDZtGpydnTFs2DAjRk0Pev311/Hzzz9j06ZNsLW11Xwrbm9vD0tLS4hEIl6L9cDj3se7d+/yeqwHpk2bhujoaHh5eaGgoADr1q3Dnj17sH37dl6LtclodQpNyNdffy14e3sLUqlUaN++vVbpUqr7Ro4cKXh4eAgSiUTw9PQUnn76aeH06dPGDoseY/fu3QKACstLL70kCIK6BPTMmTMFd3d3QSaTCV27dhVOnjxp3KBJy6Pew6KiIiEqKkpwcXERJBKJ0LRpU+Gll14SUlNTjR02PaCy9w+AsHLlSs02vBbrvse9j7we64eXX35Z83nUxcVF6NWrl7Bz507N87wWa4dIEAShNpM5IiIiIiIiU8R7roiIiIiIiPSAyRUREREREZEeMLkiIiIiIiLSAyZXREREREREesDkioiIiIiISA+YXBEREREREekBkysiIiIiIiI9YHJFRERERESkB0yuiIiIHtK9e3dMnjzZ2GEQEV
E9w+SKiIiIiIhID5hcERERERER6QGTKyIiosfYvn077O3tsXr1amOHQkREdRiTKyIiokdYt24dRowYgdWrV+PFF180djhERFSHMbkiIiKqQlxcHCZOnIhNmzZhyJAhxg6HiIjqOHNjB0BERFQXrV+/HpmZmThw4AA6depk7HCIiKgeYM8VERFRJdq2bQsXFxesXLkSgiAYOxwiIqoHmFwRERFVwtfXF7t378amTZvw5ptvGjscIiKqBzgskIiIqAotW7bE7t270b17d5ibmyM2NtbYIRERUR3G5IqIiOgR/P398eeff6J79+4Qi8X4/PPPjR0SERHVUSKBA8mJiIiIiIieGO+5IiIiIiIi0gMmV0RERERERHrA5IqIiIiIiEgPmFwRERERERHpAZMrIiIiIiIiPWByRUREREREpAdMroiIiIiIiPSAyRUREREREZEeMLkiIiIiIiLSAyZXREREREREesDkioiIiIiISA/+Hwqun4q350mZAAAAAElFTkSuQmCC\n",
"text/plain": [
"