{ "cells": [ { "cell_type": "markdown", "metadata": { "toc": true }, "source": [ "
Table of Contents\n", "
" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n" ], "text/plain": [ "" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# code for loading the format for the notebook\n", "import os\n", "\n", "# path : store the current path to convert back to it later\n", "path = os.getcwd()\n", "os.chdir(os.path.join('..', '..', 'notebook_format'))\n", "from formats import load_style\n", "load_style(plot_style = False)" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Ethen 2017-12-02 15:10:26 \n", "\n", "CPython 3.5.2\n", "IPython 6.2.1\n", "\n", "numpy 1.13.3\n", "pandas 0.20.3\n", "matplotlib 2.1.0\n", "sklearn 0.19.1\n" ] } ], "source": [ "os.chdir(path)\n", "\n", "# 1. magic for inline plot\n", "# 2. magic to print version\n", "# 3. magic so that the notebook will reload external python modules\n", "# 4. magic to enable retina (high resolution) plots\n", "# https://gist.github.com/minrk/3301035\n", "%matplotlib inline\n", "%load_ext watermark\n", "%load_ext autoreload\n", "%autoreload 2\n", "%config InlineBackend.figure_format = 'retina'\n", "\n", "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "\n", "%watermark -a 'Ethen' -d -t -v -p numpy,pandas,matplotlib,sklearn" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Text Machine Learning with scikit-learn" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 1: Model building in scikit-learn (refresher)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you're already familiar with model-building in different packages, here's a quick refresher on how to train a simple classification model with scikit-learn." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# load the iris dataset as an example\n", "from sklearn.datasets import load_iris\n", "iris = load_iris()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# store the feature matrix (X) and response vector (y)\n", "# by convention X is capitialized simply because it's a two-dimension matrix\n", "X = iris.data\n", "y = iris.target" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**\"Features\"** are also known as predictors, inputs, or attributes. The **\"response\"** is also known as the target, label, or output." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(150, 4)\n", "(150,)\n" ] } ], "source": [ "# check the shapes of X and y\n", "print(X.shape)\n", "print(y.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**\"Observations\"** are also known as samples, instances, or records." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
sepal length (cm)sepal width (cm)petal length (cm)petal width (cm)
05.13.51.40.2
14.93.01.40.2
24.73.21.30.2
34.63.11.50.2
45.03.61.40.2
\n", "
" ], "text/plain": [ " sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)\n", "0 5.1 3.5 1.4 0.2\n", "1 4.9 3.0 1.4 0.2\n", "2 4.7 3.2 1.3 0.2\n", "3 4.6 3.1 1.5 0.2\n", "4 5.0 3.6 1.4 0.2" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# examine the first 5 rows of the feature matrix (including the feature names)\n", "pd.DataFrame(X, columns = iris.feature_names).head()" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n", " 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n", " 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2\n", " 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2\n", " 2 2]\n" ] } ], "source": [ "# examine the response vector\n", "print(y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to **build a model**, the features must be **numeric**, and every observation must have the **same features in the same order**." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',\n", " metric_params=None, n_jobs=1, n_neighbors=5, p=2,\n", " weights='uniform')" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# import the class\n", "from sklearn.neighbors import KNeighborsClassifier\n", "\n", "# instantiate the model (with the default parameters)\n", "knn = KNeighborsClassifier()\n", "\n", "# fit the model with data (occurs in-place)\n", "# learning the relationship between X and y\n", "knn.fit(X, y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to **make a prediction**, the new observation must have the **same features in the same order as the training observations**, both in number and meaning." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([1])" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# predict the response for a new observation\n", "knn.predict([[3, 5, 4, 2]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 2: Representing text as numerical data" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# example text for model training (SMS messages)\n", "simple_train = ['call you tonight', 'Call me a cab', 'please call me... PLEASE!']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From the [scikit-learn documentation](http://scikit-learn.org/stable/modules/feature_extraction.html#text-feature-extraction):\n", "\n", "> Text Analysis is a major application field for machine learning algorithms. 
However the raw data, a sequence of symbols cannot be fed directly to the algorithms themselves as most of them expect **numerical feature vectors with a fixed size** rather than the **raw text documents with variable length**.\n", "\n", "Thus, when working with text, we will use [CountVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) to \"convert text into a matrix of token counts\":" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# import and instantiate CountVectorizer (with the default parameters)\n", "from sklearn.feature_extraction.text import CountVectorizer\n", "vect = CountVectorizer()" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "<3x6 sparse matrix of type ''\n", "\twith 9 stored elements in Compressed Sparse Row format>" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# learn the 'vocabulary' of the training data and transform \n", "# it into a 'document-term matrix'. Unlike models, for any \n", "# feature extraction or data-preprocessing toolkit, we use transform \n", "# instead of fit, since it's taking some data and \"transforming\" it,\n", "# fit_transform is just a shortcut for calling .fit and .transform separately\n", "simple_train_dtm = vect.fit_transform(simple_train)\n", "simple_train_dtm" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['cab', 'call', 'me', 'please', 'tonight', 'you']" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# examine the fitted vocabulary\n", "vect.get_feature_names()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice, with the default parameters:\n", "\n", "- Single character like `a` will be removed.\n", "- Punctuations have been removed.\n", "- Words have been converted to lower cases.\n", "- The vocabulary has no duplicated." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0, 1, 0, 0, 1, 1],\n", " [1, 1, 1, 0, 0, 0],\n", " [0, 1, 1, 2, 0, 0]], dtype=int64)" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# convert sparse matrix to a dense matrix \n", "simple_train_dtm.toarray()" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
cabcallmepleasetonightyou
0010011
1111000
2011200
\n", "
" ], "text/plain": [ " cab call me please tonight you\n", "0 0 1 0 0 1 1\n", "1 1 1 1 0 0 0\n", "2 0 1 1 2 0 0" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# examine the vocabulary and document-term matrix together.\n", "# 3 * 6 matrix, because there're three documents and six tokens that were learned\n", "# each number represents the counts for each token in each document\n", "pd.DataFrame(simple_train_dtm.toarray(), columns = vect.get_feature_names())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From the [scikit-learn documentation](http://scikit-learn.org/stable/modules/feature_extraction.html#text-feature-extraction):\n", "\n", "> In this scheme, features and samples are defined as follows:\n", "\n", "> - Each individual token occurrence frequency (normalized or not) is treated as a **feature**.\n", "> - The vector of all the token frequencies for a given document is considered a multivariate **sample**.\n", "\n", "> A **corpus of documents** can thus be represented by a matrix with **one row per document** and **one column per token** (e.g. word) occurring in the corpus.\n", "\n", "> We call **vectorization** the general process of turning a collection of text documents into numerical feature vectors. This specific strategy (tokenization, counting and normalization) is called the **Bag of Words** or \"Bag of n-grams\" representation. Documents are described by word occurrences while completely ignoring the relative position information of the words in the document.\n", "\n", "### Side Note On Sparse Matrices" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", " (0, 4)\t1\n", " (0, 5)\t1\n", " (0, 1)\t1\n", " (1, 0)\t1\n", " (1, 2)\t1\n", " (1, 1)\t1\n", " (2, 3)\t2\n", " (2, 2)\t1\n", " (2, 1)\t1\n" ] } ], "source": [ "# check the type of the document-term matrix\n", "print(type(simple_train_dtm))\n", "\n", "# examine the sparse matrix contents\n", "# represented coordinates, and the values at that coordinates\n", "print(simple_train_dtm)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From the [scikit-learn documentation](http://scikit-learn.org/stable/modules/feature_extraction.html#text-feature-extraction):\n", "\n", "> As most documents will typically use a very small subset of the words used in the corpus, the resulting matrix will have **many feature values that are zeros** (typically more than 99% of them).\n", "\n", "> For instance, a collection of 10,000 short text documents (such as emails) will use a vocabulary with a size in the order of 100,000 unique words in total while each document will use 100 to 1000 unique words individually.\n", "\n", "> In order to be able to **store such a matrix in memory** but also to **speed up operations**, implementations will typically use a **sparse representation** such as the implementations available in the `scipy.sparse` package." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# example text for model testing\n", "simple_test = [\"please don't call me\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to **make a prediction**, the new observation must have the **same features as the training observations**, both in number and meaning." 
] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[0, 1, 1, 1, 0, 0]])" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# transform testing data into a document-term matrix,\n", "# using the existing vocabulary from the training data\n", "simple_test_dtm = vect.transform(simple_test)\n", "simple_test_dtm.toarray()" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
cabcallmepleasetonightyou
0011100
\n", "
" ], "text/plain": [ " cab call me please tonight you\n", "0 0 1 1 1 0 0" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# examine the vocabulary and document-term matrix together\n", "pd.DataFrame( simple_test_dtm.toarray(), columns = vect.get_feature_names() )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Summary:**\n", "\n", "- `vect.fit(train)` **learns the vocabulary** of the training data.\n", "- `vect.transform(train)` uses the **fitted vocabulary** to build a document-term matrix from the training data. Or just `vect.fit_transform(train)` to combine the two steps into one.\n", "- `vect.transform(test)` uses the **fitted vocabulary** to build a document-term matrix from the testing data. Note that it **ignores tokens** it hasn't seen before, this is reasonable due to the fact that the word does not exist in the training data, thus the model doesn't know anything about the relationship between the word and the output." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 3: Reading a text-based dataset into pandas and vectorizing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is a text data that has been labeled as spam and ham (non-spam), our goal is to see if we can correctly classify the labels using text messages." ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "dimension: (5572, 2)\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
labelmessage
0hamGo until jurong point, crazy.. Available only ...
1hamOk lar... Joking wif u oni...
2spamFree entry in 2 a wkly comp to win FA Cup fina...
3hamU dun say so early hor... U c already then say...
4hamNah I don't think he goes to usf, he lives aro...
5spamFreeMsg Hey there darling it's been 3 week's n...
6hamEven my brother is not like to speak with me. ...
7hamAs per your request 'Melle Melle (Oru Minnamin...
8spamWINNER!! As a valued network customer you have...
9spamHad your mobile 11 months or more? U R entitle...
\n", "
" ], "text/plain": [ " label message\n", "0 ham Go until jurong point, crazy.. Available only ...\n", "1 ham Ok lar... Joking wif u oni...\n", "2 spam Free entry in 2 a wkly comp to win FA Cup fina...\n", "3 ham U dun say so early hor... U c already then say...\n", "4 ham Nah I don't think he goes to usf, he lives aro...\n", "5 spam FreeMsg Hey there darling it's been 3 week's n...\n", "6 ham Even my brother is not like to speak with me. ...\n", "7 ham As per your request 'Melle Melle (Oru Minnamin...\n", "8 spam WINNER!! As a valued network customer you have...\n", "9 spam Had your mobile 11 months or more? U R entitle..." ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# examine the data's shape and first 10 rows\n", "sms = pd.read_table('sms.tsv', header = None, names = ['label', 'message'])\n", "print('dimension:', sms.shape)\n", "sms.head(10)" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# alternative: read file into pandas from a URL\n", "# url = 'https://raw.githubusercontent.com/justmarkham/pycon-2016-tutorial/master/data/sms.tsv'\n", "# sms = pd.read_table( url, header = None, names = ['label', 'message'] )" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ham 4825\n", "spam 747\n", "Name: label, dtype: int64" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# examine the class distribution\n", "sms['label'].value_counts()" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
labelmessagelabel_num
0hamGo until jurong point, crazy.. Available only ...0
1hamOk lar... Joking wif u oni...0
2spamFree entry in 2 a wkly comp to win FA Cup fina...1
3hamU dun say so early hor... U c already then say...0
4hamNah I don't think he goes to usf, he lives aro...0
5spamFreeMsg Hey there darling it's been 3 week's n...1
6hamEven my brother is not like to speak with me. ...0
7hamAs per your request 'Melle Melle (Oru Minnamin...0
8spamWINNER!! As a valued network customer you have...1
9spamHad your mobile 11 months or more? U R entitle...1
\n", "
" ], "text/plain": [ " label message label_num\n", "0 ham Go until jurong point, crazy.. Available only ... 0\n", "1 ham Ok lar... Joking wif u oni... 0\n", "2 spam Free entry in 2 a wkly comp to win FA Cup fina... 1\n", "3 ham U dun say so early hor... U c already then say... 0\n", "4 ham Nah I don't think he goes to usf, he lives aro... 0\n", "5 spam FreeMsg Hey there darling it's been 3 week's n... 1\n", "6 ham Even my brother is not like to speak with me. ... 0\n", "7 ham As per your request 'Melle Melle (Oru Minnamin... 0\n", "8 spam WINNER!! As a valued network customer you have... 1\n", "9 spam Had your mobile 11 months or more? U R entitle... 1" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# convert label to a numerical variable\n", "sms['label_num'] = sms['label'].map({'ham': 0, 'spam': 1})\n", "sms.head(10)" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(5572,)\n", "(5572,)\n" ] } ], "source": [ "# how to define X and y (from the SMS data) for use with COUNTVECTORIZER\n", "# COUNTVECTORIZER accepts one-dimension data\n", "X = sms['message']\n", "y = sms['label_num']\n", "print(X.shape)\n", "print(y.shape)" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(4179,)\n", "(1393,)\n" ] } ], "source": [ "# split X and y into training and testing sets\n", "from sklearn.model_selection import train_test_split\n", "\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 1)\n", "\n", "print(X_train.shape)\n", "print(X_test.shape)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# instantiate the vectorizer\n", "vect = CountVectorizer()\n", "\n", "# learn training data vocabulary, then use it to create a document-term matrix\n", "# equivalently: combine fit and transform into a single step\n", "X_train_dtm = vect.fit_transform(X_train)\n", "\n", "# transform testing data (using fitted vocabulary) into a document-term matrix\n", "X_test_dtm = vect.transform(X_test)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 4: Building and evaluating a model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, the algorithms are all treated as black box, its inner workings are explained in other notebooks.\n", "\n", "We will use [multinomial Naive Bayes](http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html), Naive Bayes class algorithms are extremely fast and it's usually the go-to method for doing classification on text data.:\n", "\n", "> The multinomial Naive Bayes classifier is suitable for classification with **discrete features** (e.g., word counts for text classification). The multinomial distribution normally requires integer feature counts. However, in practice, fractional counts such as tf-idf may also work." 
] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 2 µs, sys: 1e+03 ns, total: 3 µs\n", "Wall time: 5.01 µs\n" ] }, { "data": { "text/plain": [ "MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# import and instantiate a Multinomial Naive Bayes model\n", "from sklearn.naive_bayes import MultinomialNB\n", "nb = MultinomialNB()\n", "\n", "# train the model using X_train_dtm (timing it with an IPython \"magic command\")\n", "%time \n", "nb.fit(X_train_dtm, y_train)" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.988513998564\n" ] }, { "data": { "text/plain": [ "0.98664310005369615" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# make class predictions for X_test_dtm\n", "y_pred_class = nb.predict(X_test_dtm)\n", "\n", "# calculate accuracy of class predictions\n", "from sklearn import metrics\n", "print(metrics.accuracy_score(y_test, y_pred_class))\n", "\n", "# print the confusion matrix\n", "# metrics.confusion_matrix(y_test, y_pred_class)\n", "\n", "# calculate predicted probabilities for X_test_dtm\n", "# extract the probability that each observation belongs to class 1,\n", "# though the downside of naive bayes is that the probabilities are poorly calibrated\n", "y_pred_prob = nb.predict_proba(X_test_dtm)[:, 1]\n", "\n", "# calculate AUC\n", "metrics.roc_auc_score(y_test, y_pred_prob)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 5: Building and evaluating another model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will compare multinomial Naive Bayes with [logistic regression](http://scikit-learn.org/stable/modules/linear_model.html#logistic-regression):\n", "\n", "> Logistic regression, despite its name, is a **linear model for classification** rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function." 
] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 2 µs, sys: 1 µs, total: 3 µs\n", "Wall time: 5.01 µs\n" ] }, { "data": { "text/plain": [ "LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,\n", " intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,\n", " penalty='l2', random_state=None, solver='liblinear', tol=0.0001,\n", " verbose=0, warm_start=False)" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# import and instantiate a logistic regression model\n", "from sklearn.linear_model import LogisticRegression\n", "logreg = LogisticRegression()\n", "\n", "# train the model using X_train_dtm\n", "# it accepts both sparse and dense arrays\n", "%time \n", "logreg.fit(X_train_dtm, y_train)" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.987796123475\n" ] }, { "data": { "text/plain": [ "0.99368176123143015" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# make class predictions for X_test_dtm\n", "y_pred_class = logreg.predict(X_test_dtm)\n", "\n", "# calculate accuracy\n", "print(metrics.accuracy_score(y_test, y_pred_class))\n", "\n", "# calculate predicted probabilities for X_test_dtm (well calibrated)\n", "# it's a good model if you care about predicting the classified probability\n", "y_pred_prob = logreg.predict_proba(X_test_dtm)[:, 1]\n", "\n", "# calculate auc\n", "metrics.roc_auc_score(y_test, y_pred_prob)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 6: Examining a model for further insight" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After building the model, it's a good practice to look at examples of where your model got it wrong, and think about what features you can add to \"maybe\" improve the performance." ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "false positives\n", "2340 Cheers for the message Zogtorius. I’ve been st...\n", "Name: message, dtype: object\n", "\n", "false negatives\n", "1777 Call FREEPHONE 0800 542 0578 now!\n", "763 Urgent Ur £500 guaranteed award is still uncla...\n", "3132 LookAtMe!: Thanks for your purchase of a video...\n", "1045 We know someone who you know that fancies you....\n", "684 Hi I'm sue. I am 20 years old and work as a la...\n", "4073 Loans for any purpose even if you have Bad Cre...\n", "1875 Would you like to see my XXX pics they are so ...\n", "4298 thesmszone.com lets you send free anonymous an...\n", "4394 RECPT 1/3. You have ordered a Ringtone. Your o...\n", "4949 Hi this is Amy, we will be sending you a free ...\n", "761 Romantic Paris. 2 nights, 2 flights from £79 B...\n", "3991 (Bank of Granite issues Strong-Buy) EXPLOSIVE ...\n", "2821 INTERFLORA - “It's not too late to order Inter...\n", "2863 Adult 18 Content Your video will be with you s...\n", "2247 Hi ya babe x u 4goten bout me?' scammers getti...\n", "4514 Money i have won wining number 946 wot do i do...\n", "Name: message, dtype: object\n" ] }, { "data": { "text/plain": [ "\"LookAtMe!: Thanks for your purchase of a video clip from LookAtMe!, you've been charged 35p. Think you can do better? 
Why not send a video in a MMSto 32323.\"" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# print message text for the false positives (ham incorrectly classified as spam)\n", "print('false positives')\n", "print(X_test[(y_pred_class == 1) & (y_test == 0)])\n", "# equivalent way to performing the operation\n", "# print(X_test[y_pred_class > y_test])\n", "\n", "\n", "# print message text for the false negatives (spam incorrectly classified as ham)\n", "print('\\nfalse negatives')\n", "print(X_test[y_pred_class < y_test])\n", "\n", "# example false negative\n", "X_test[3132]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After looking at these incorrectly classified messages, you might think does the message's length have something to do with it being a spam/ham. \n", "\n", "Next, we will examine the our trained Naive Bayes model to calculate the approximate **\"spamminess\" of each token** to see which word appears more often in spam messages." ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 0. 0. 0. ..., 1. 1. 1.]\n", " [ 5. 23. 2. ..., 0. 0. 0.]]\n" ] } ], "source": [ "# Naive Bayes counts the number of times each token appears in each class, where\n", "# rows represent classes, columns represent tokens\n", "# and the trailing _ is just the convention that's used to denote attributes that\n", "# are learned during the model fitting process\n", "print(nb.feature_count_)\n", "\n", "# extract the number of times each token appears across all HAM and SPAM messages\n", "ham_token_count = nb.feature_count_[0, :]\n", "spam_token_count = nb.feature_count_[1, :]" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "7456\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
hamspam
token
000.05.0
0000.023.0
0087040504060.02.0
01210.01.0
012235852360.01.0
\n", "
" ], "text/plain": [ " ham spam\n", "token \n", "00 0.0 5.0\n", "000 0.0 23.0\n", "008704050406 0.0 2.0\n", "0121 0.0 1.0\n", "01223585236 0.0 1.0" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# store the vocabulary of X_train\n", "X_train_tokens = vect.get_feature_names()\n", "print(len(X_train_tokens))\n", "\n", "# examine the first or last 50 tokens if you wish\n", "# print(X_train_tokens[0:50])\n", "# print(X_train_tokens[-50:])\n", "\n", "# create a DataFrame of tokens with their separate ham and spam counts\n", "tokens = pd.DataFrame({\n", " 'token': X_train_tokens, \n", " 'ham': ham_token_count, \n", " 'spam': spam_token_count\n", "}).set_index('token')\n", "tokens.head()" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
hamspam
token
very64.02.0
nasty1.01.0
villa0.01.0
beloved1.00.0
textoperator0.02.0
\n", "
" ], "text/plain": [ " ham spam\n", "token \n", "very 64.0 2.0\n", "nasty 1.0 1.0\n", "villa 0.0 1.0\n", "beloved 1.0 0.0\n", "textoperator 0.0 2.0" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# examine 5 random DataFrame rows\n", "tokens.sample(5, random_state = 6)" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 3617., 562.])" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Naive Bayes counts the number of observations in each class\n", "nb.class_count_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before we can calculate the relative \"spamminess\" of each token, we need to avoid **dividing by zero** and account for the **class imbalance**." ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
hamspamspam_ratio
token
very0.0179710.0053380.297044
nasty0.0005530.0035596.435943
villa0.0002760.00355912.871886
beloved0.0005530.0017793.217972
textoperator0.0002760.00533819.307829
\n", "
" ], "text/plain": [ " ham spam spam_ratio\n", "token \n", "very 0.017971 0.005338 0.297044\n", "nasty 0.000553 0.003559 6.435943\n", "villa 0.000276 0.003559 12.871886\n", "beloved 0.000553 0.001779 3.217972\n", "textoperator 0.000276 0.005338 19.307829" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# add 1 to ham and spam counts to avoid dividing by 0\n", "# convert the ham and spam counts into ratio frequencies\n", "tokens['ham'] = (tokens['ham'] + 1) / nb.class_count_[0]\n", "tokens['spam'] = (tokens['spam'] + 1) / nb.class_count_[1]\n", "\n", "# calculate the conceptual ratio of spam-to-ham for each token\n", "tokens['spam_ratio'] = tokens['spam'] / tokens['ham']\n", "tokens.sample(5, random_state = 6)" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
hamspamspam_ratio
token
claim0.0002760.158363572.798932
prize0.0002760.135231489.131673
150p0.0002760.087189315.361210
tone0.0002760.085409308.925267
guaranteed0.0002760.076512276.745552
\n", "
" ], "text/plain": [ " ham spam spam_ratio\n", "token \n", "claim 0.000276 0.158363 572.798932\n", "prize 0.000276 0.135231 489.131673\n", "150p 0.000276 0.087189 315.361210\n", "tone 0.000276 0.085409 308.925267\n", "guaranteed 0.000276 0.076512 276.745552" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# examine the DataFrame sorted by spam_ratio\n", "# only interpret them as relative ratio instead of actual ratio \n", "# since we added the one to prevent zero division\n", "tokens.sort_values('spam_ratio', ascending = False).head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "e.g. From the looks of it, the word claim appears a lot more often in spam messages than ham ones." ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "83.667259786476862" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# look up the spam_ratio for a given token\n", "tokens.loc['dating', 'spam_ratio']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part 7: Tuning the vectorizer" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Thus far, we have been using the default parameters of [CountVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html):" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "CountVectorizer(analyzer='word', binary=False, decode_error='strict',\n", " dtype=, encoding='utf-8', input='content',\n", " lowercase=True, max_df=1.0, max_features=None, min_df=1,\n", " ngram_range=(1, 1), preprocessor=None, stop_words=None,\n", " strip_accents=None, token_pattern='(?u)\\\\b\\\\w\\\\w+\\\\b',\n", " tokenizer=None, vocabulary=None)" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# show default parameters for CountVectorizer\n", "vect" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, the vectorizer is worth tuning, just like a model is worth tuning! Here are a few parameters that you might want to tune:\n", "\n", "- **stop_words:** string {'english'}, list, or None (default)\n", " - If 'english', a built-in stop word list for English is used.\n", " - If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens.\n", " - If None, no stop words will be used.\n", " \n", "Removing common and uncommon words is extremely useful in text analytics. The rationale behind it is that common words such as 'the', 'a', or 'and' appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be, or in a sense, they carry less semantic weight. On the other hand, there may be words that only appear one of twice in the entire corpus and we simply don't have enough data about these words to learn a meaningful output." ] }, { "cell_type": "code", "execution_count": 40, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# remove English stop words, these are usually commonly appeared words that\n", "# probably don't provide a lot of usual messages. 
\n", "vect = CountVectorizer(stop_words = 'english')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- **ngram_range:** tuple (min_n, max_n), default=(1, 1)\n", " - The lower and upper boundary of the range of n-values for different n-grams to be extracted.\n", " - All values of n such that min_n <= n <= max_n will be used." ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# include 1-grams and 2-grams\n", "# it might be the case that word pairs has relationships with the output,\n", "# e.g. with 2-grams \"not happy\" won't be split into \"not\" and \"happy\" and it might be predictive\n", "# however, it still might introduce more noise than signals\n", "vect = CountVectorizer(ngram_range = (1, 2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- **max_df:** float in range [0.0, 1.0] or int, default=1.0\n", " - When building the vocabulary, ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words).\n", " - If float, the parameter represents a proportion of documents.\n", " - If integer, the parameter represents an absolute count." ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# ignore terms if that appear too common, here when it\n", "# appears in more than 50% of the documents\n", "vect = CountVectorizer(max_df = 0.5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- **min_df:** float in range [0.0, 1.0] or int, default=1\n", " - When building the vocabulary, ignore terms that have a document frequency strictly lower than the given threshold. (This value is also called \"cut-off\" in the literature.)\n", " - If float, the parameter represents a proportion of documents.\n", " - If integer, the parameter represents an absolute count." ] }, { "cell_type": "code", "execution_count": 43, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# only keep terms that appear in at least 2 documents\n", "vect = CountVectorizer(min_df = 2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Guidelines for tuning CountVectorizer:**\n", "\n", "- Use your knowledge of the **problem** and the **text**, and your understanding of the **tuning parameters**, to help you decide what parameters to tune and how to tune them.\n", "- **Experiment**, and let the data tell you the best approach!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Putting it all together" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the cell below, we're putting the basic workflow of text classification's script into one cell for future reference's convenience." 
] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "accuracy: 0.988513998564\n", "auc 0.993681761231\n" ] } ], "source": [ "import pandas as pd\n", "from sklearn import metrics\n", "from sklearn.naive_bayes import MultinomialNB\n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.feature_extraction.text import CountVectorizer\n", "\n", "# using the example spam dataset\n", "# read it in, extract the input and output columns\n", "sms = pd.read_table('sms.tsv', header = None, names = ['label', 'message'])\n", "sms['label_num'] = sms['label'].map({'ham': 0, 'spam': 1})\n", "X = sms['message']\n", "y = sms['label_num']\n", "\n", "# split X and y into training and testing sets\n", "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 1)\n", "\n", "# convert both sets' text column to document-term matrix\n", "vect = CountVectorizer()\n", "X_train_dtm = vect.fit_transform(X_train)\n", "X_test_dtm = vect.transform(X_test)\n", "\n", "# train the mutinomial naive bayes model, \n", "# predict on the test set and output the accuracy score\n", "nb = MultinomialNB()\n", "nb.fit(X_train_dtm, y_train)\n", "y_pred_class = nb.predict(X_test_dtm)\n", "accuracy = metrics.accuracy_score(y_test, y_pred_class)\n", "print('accuracy:', accuracy)\n", "\n", "# train the logistic regression model, \n", "# predict on the test set and output the auc score\n", "logreg = LogisticRegression()\n", "logreg.fit(X_train_dtm, y_train)\n", "y_pred_prob = logreg.predict_proba(X_test_dtm)[:, 1]\n", "auc = metrics.roc_auc_score(y_test, y_pred_prob)\n", "print('auc', auc)" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "input sparsity ratio: 0.998228130998\n" ] } ], "source": [ "def sparsity_ratio(X):\n", " \"\"\"\n", " The rule of thumb is if the sparsity ratio is greater than 90% \n", " then you can probably benefit from sparse formats, that are the \n", " default representation of document-term matrix for scikit learn\n", " \n", " Parameters\n", " ----------\n", " X : scipy sparse matrix\n", " document-term matrix\n", " \n", " Returns\n", " -------\n", " sparsity : float\n", " the ratio of elements in the matrix that are zeros\n", " \"\"\"\n", " sparsity = 1 - X.nnz / np.prod(X.shape)\n", " return sparsity\n", "\n", "\n", "print('input sparsity ratio: ', sparsity_ratio(X_train_dtm))" ] }, { "cell_type": "code", "execution_count": 46, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def vis_coef(estimator, feature_names, topn = 10):\n", " \"\"\"\n", " Visualize the top-n most influential coefficients\n", " for linear models.\n", "\n", " Parameters\n", " ----------\n", " estimator : sklearn-like linear model\n", " An estimator that contains the attribute\n", " ``coef_``.\n", "\n", " feature_names : str 1d array or list[str]\n", " Feature names that corresponds to the coefficients.\n", "\n", " topn : int, default 10\n", " Here topn refers to the largest positive and negative coefficients,\n", " i.e. 
a topn of 10, would show the top and bottom 10, so a total of\n", " 20 coefficient weights.\n", " \"\"\"\n", " fig = plt.figure()\n", " n_classes = estimator.coef_.shape[0]\n", " feature_names = np.asarray(feature_names)\n", " for idx, coefs in enumerate(estimator.coef_, 1):\n", " sorted_coefs = np.argsort(coefs)\n", " positive_coefs = sorted_coefs[-topn:]\n", " negative_coefs = sorted_coefs[:topn]\n", " top_coefs = np.hstack([negative_coefs, positive_coefs])\n", "\n", " colors = ['#A60628' if c < 0 else '#348ABD' for c in coefs[top_coefs]]\n", " y_pos = np.arange(2 * topn)\n", " fig.add_subplot(n_classes, 1, idx)\n", " plt.barh(y_pos, coefs[top_coefs], color = colors, align = 'center')\n", " plt.yticks(y_pos, feature_names[top_coefs])\n", " plt.title('top {} positive/negative coefficient'.format(topn))\n", "\n", " plt.tight_layout()" ] }, { "cell_type": "code", "execution_count": 47, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAABqUAAARmCAYAAAC2rb+QAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAAWJQAAFiUBSVIk8AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAAIABJREFUeJzs3Xm4JFV5P/DvKwOKIgqiggsQo0bc\nN4y7iDEmJqISNXEf1KiJxiURo/m5axKNxrhvUUBQE9egxjUuuCcSFLe4YHRcEBERkE0Y4Pz+qOrc\n9tLdt++dW9PD8Pk8z3mqq+rUqbe2nmf6vadOtdYCAAAAAAAAQ7rMogMAAAAAAABg+ycpBQAAAAAA\nwOAkpQAAAAAAABicpBQAAAAAAACDk5QCAAAAAABgcJJSAAAAAAAADE5SCgAAAAAAgMFJSgEAAAAA\nADA4SSkAAAAAAAAGJykFAAAAAADA4CSlAAAAAAAAGJykFAAAAAAAAIOTlAIAAAAAAGBwklIAAMDC\nVdUBVdX6snEd2jumb2vTlkfH9qaqNvX3xzGLjoXJqmr/qnp7Vf24qs4f+344YFm936qqw6rq+1X1\nq+XfI1W1cdq26xTnoO0DAGxvJKUAACBJVe1QVTfuf2B8ZVV9oarO2ZJESVXdsKpeVVXfrqqzq+q0\nqjquqp5ZVbsPcBhsB6rqGf0995OqqkXHA1tbVf1eks8neUCSaybZcUq9myU5LskhSfZNctmtFCIA\nAGskKQUAwIouJX8J/vgkX0tyeP/5tkl2XmtjVfXYJF9K8rgk109y+SRXTnLLJM9L8vWquv0Wxnyp\nsd49qbZx9+6n72uttYVGcglxKfmOujT5pyQbkpyV7vv4t5PcpC/HjtV7YZIrJLkwydOT3H6s3tFb\nMd5LDM8KALBoGxYdAAAAbCPGe6RckOSbSc5NcptVN1R1vySv6ds8LcnfJ/lsksslOTjJnyfZK8n7\nq+q3W2vf3bLQL/laa8fk16/BlrZ3wHq1tTVV1bWS3Kqffe8iY9metdb2XXQMTFZVeye5QT/7+tba\nq6fU2zHJXfvZo1trL5xUr7V2RJIj1jnMrdY+AMD2RlIKAAA6X07yF+leBXV8a+3cvkfOqpJSVbVL\nklemS7Cck+ROrbVvjFX5ZFV9Ocmbkuye5KVJDtry8NlOHJTu3jkzyccXHAsswjXHPn97Rr09svS6\nvln1AADYhnh9HwAAJGmtfaq19qrW2hdaa+duQVMbk+zZf/6HZQmp0b4OS/KpfvZeVXWjLdgf25f7\n9NMPt9bOX2gksBjj40JtXod6AABsQySlAACYajSOT7pxlkY+OTYexagcMWX7favqH6vqq1V1RlX9\nqqp+WFVvr6p7rrDvXxv3ojqPqKpPVdXPqurcqjqhql5eVdec1dZWdr+xz2+aUW983f2m1pph0jhL\nVXVwVX2oqk6qqvOq6gdV9aaqusEKzY3aXPM167ffo6qeVVVfqKpTq2pzVZ1WVd+tqmP6dTef51jG\n4mlJPjlW/fAJ9+Axy9o7pl++adny+49t8+g5juc3x+q/bka936+qo6rqf6vq7L58p6reWFW3WGk/\nfRtXSnJAP3v0snUXGwemqu7dX+uf9td6U1X9c1VdZ8797VJVf1lVHx+7X06tqs9X1d9U1ZXnaGOn\nqnpSVf1Xf7+cWVXfqKq/q6qr9nU2TbpGY21crr9vX19Vx/X3y+i++XJVvXTaMdUav6OmxVRVh45t\n87tzHP/dxuo/bUqdy1TVH1fVu/rn8dyq+mVVfb2qXlFV111pP/Pqr8cjq+q9/XN7bl++V1Xv7tft\nMmP7m1TVa6vq2/21PKe/p4+oVYyBV1U36o/ta/11PK+qftzHcJ+qutirOkfPbGY/60f0pSX5/li9\nZ9eU74NJz86MuPeuqr+tqv+sqlP6+/CMqvpSVb1q0varbH+fqnphVf13/6yd3z97H6yqh1fV1LfZ\njI67P/ZU1Y5V9YSq+mJ/js/u76nnVtUVJ2y/Rf+eAwCsm9aaoiiKoiiKokws6X4gb3OUIyZs+5gk\n562w3dFJLj9l3xvH6t09yQdmtHNGkgMHOP7xGDbOUX/D2DGfsELda421/Yl1uD4bk7xhxjn6VZIH\nrdDell6z2yY5dY775WMrHcvY8n3nvAePWdbeMf3yTcuWXzbdOF8tyWfnOMfPHdvH7Ses3z3JR+eI\n7++T1Ar7emBf9/wkV55xLx6Y5LAZ+zo9yW1W2Nfdkpy8Qsw/S3LnGW1cLcnxM7Y/McnNkmyadI3G\n2jl6jvN3XpKHr9d31LSYklwj3ZhyLclb5rg/3tzXvTDJtSas3yfdK0FnxbY5yePX8h2wbF/7p0vU\nrHQunjRl++f1xzFr29cl2WFGDDsk+cc52vlwkitNeWZnXse+zP19kF9/dg6YEfvT0z17K7U969mc\n1f5TsvL3638nueaU7ceP+6pJjp3RzteTXGU9nhVFURRFUZT1LsaUAgBglmOT3CTJvZO8oF/2iH75\nuNPGZ6rqIel+vEySc5O8PN2PkOck
uWmSv0qyX9/uu6vqnq21NiOOv033g+unkrwmyf+mG0/kgUke\nlmTXJO+vqlu01r6z+sNcN9dLslP/+X9mVWyt/biqzkxyxSQ3XId9/3m6c/SVJP+U5BvpzstB/brL\nJjmyqk5srX1q+cZbes2qaqck70yXpLkw3V/jfyDJSel+5L96kpsn+b10P3zO68R09+D+6RIxSfKM\nJO9dVu/seRprrZ1XVe9I8ugkd6iq67TWvjdjk4f00++21j4/vqLvcfLpJDdKd0zvSZdg+X66RMNN\nkzwu3XE/LV1i8Lkz9nXvfvqp1trpM+o9L8kd0p3fw/v9XSXdj+MPSnKlJG+tqv1aaxcs37iq7p7k\ng+mSqL9I90wdl+RH6e6Zu6UbX+2qST5QVbdty15DWVU7JPn3dEmnpPtOeEWSb/b7/8P+2N+T5PIz\njiV9HN9J8r6+nR+mO3/XTnKndNdqlyRvqqrvtdY+M7btmr6jpmmt/aSqPp7kd5Pct6p2aa2dNalu\nVV0+ycH97Cdbaz9etv4aSb6QZK/+eI5K8h/pEmKV5NZJnpjkN5O8sqrObK29eZ44J8Ry63T34uX6\nRR9M8i9JTkj3PO6d7lxO7JVZVc9I8sx+9rQkL0n3fXtBunH1/jrdWE+P6es8dkoob0ry8P7zf/fz\n3013n+2b5KHpXlF5j3TfI/dorV3Y1z8kyRUy+1kfXceXpEsgfqSff226+3hkru+Dkap6SbrvuCQ5\nK12C/2NJftrHtF8f871W0+5Y+89J8ux+9tt9vN9Ol/i9RpI/Svdv2a2SfLh/5mYdw7+le/Zek+78\nnJLkOkmemu563ShdcnDj2Dbr+qwAAKzZorNiiqIoiqIoyrZfMudfgvd1d0vXc6ml+3Hv1hPq7Jzu\nB9RRmxtX2GdL9+P7xXqaJHnUWJ3/GPC4LxbjhPr3GKv/qjnqf2Os/k5riO+AZefoo5Pa6eMa9f74\ndpLLrPc1S9d7Z7TuiSvEfZUVjmXS/TBz/YT6x/R1N01Yd/uxtp4zo407jtV71oT1r+/XnZ7kdlPa\n2JDkHVnqEfMbU+rtNHYNLtZrZsLzMDHu/HoPqoMmrN813Q/hLd2P7lec0s71xutNWP+4sf28NxN6\nz/T3xOaxesdM29cK1/LaSX7ct/HJKXXGz88Bc9wfm6bFlC6xt+K9li5hOar3sAnrP9Kv+2GS35rS\nxhWSfLav94tp12OFY9kpSz2kLpoUy7L7ca9ly35r7Dr9ZNI9mu6PAL456xxnqadfS/JnM2J44li9\nh0xYf8Cc53/flZ6Hee6NdD1xR+tPSLLvjLb2SrLjKtu/Q39dWroek5eZ0vZ9x+o9Y8L6I8b2sznJ\n70yos3OW/l05P8keW/qsKIqiKIqirHcxphQAAOvtkHQ/fCfJ81tr/728Qmvt3HR/FT4anP5JK7T5\ns3Q/1LcJbb0xXTImSX6n5hw7aSDj43hM7F2xzHidi40BskrnJzmktXb+8hWttY9kaQyr66frBTJu\nPa7ZnmOfPzkr0NbaqbPWD611PZ6+288+dEbV0bqW5MjxFVV1rXTnLUn+X2vtC1P2dUGSP0t3fTbk\n13sujLtrlq7B8l5gy30503tcvXjs810mrH9suh5Q5yR5YGvtzEmNtNZOSNcjK0nuVlW/sazKn/fT\nc5I8qi31dhlv4xPpEncz9fuatf5HSf6hn71LVe2+Uptb6N+SjM7Lw2bUG90fZyV59/iKqrpNlp6z\nP2utfXtSA63rDTPqdbRb1ja+3IPTJWiS5BWttSOnVWytXdBaO2nZ4scl//cWlSe31r4/Ybufp/sD\ngJFJ39mjnlbvaq29dkYML0/XiypJ/nRava3oGf20JXlAa23TtIqttZNaa5unrZ/i/6XrGffFJH/T\nWrtoStv/lq5nYbLyeXlVa+1jE9o4N8mr+tkdk9xulbECAAxOUgoAgPU2+iH2oiT/PK1S/8Pfh/vZ\nm1XV1Wa0+Y42+1VGb5yw/0XYeezzxZJDE5w3Zdu1+Ghr7cQZ62edo/W4ZuOvLntEVdXscBfuqH56\nnaq64/KVVXXZJA/oZz8z4YfqP0z3o2/SvSZtqj4J97V+9g5Tqt2nn36pT8LM8tZJCdp+X9/MUrLz\nOhOqjF4397HW2ikr7OeYsc//F3dV7ZWlV05+YIV2Dl9hHxdTVbtV1XWq6kZVdeOqunG6V0om3Y/7\nN19tm6vR/7D/rn72gKq69oQY90r3msMkec+E76fReT4j3av0Zu3v6+nGYkum3x+zHDT2+cVTa003\nev5PzbLk2rjW2ueydB8f2L/CMUnS/zHAfv3s2+bY5+gVorcdb2drq6rd0vWITJKPt9a+vM7t75Ku\nJ1aS/Ou053bMMf107z7xPc1bZqwbfx3fpO8AAICFkpQCAGC93aSffqe19osV6o6P0XPTGfW+uEI7\n/zVnO0P71djnnabWWnLZsc/nTq01n5XO0ZezlChbfo7W45p9Lt3rvZLu9Vzfqqq/q6p7VNVVVmhz\nEY7K0thWk3rDHJTkyv3nST1PbjP2+dSqarNKurFiku71X7+mT+CNEgtHzxH7t1ZYPxoTZtfxhf2P\n/6M4Dpoj5q+NbT4e903GPq90330lcyRoq+rWVXVkVf003Wvs/jfJ1/sYvpZujJ+RPVZqbx2Mrnll\ncm+6hyTZYVndcaP740pJLprjXI+ekYvdH3MYXdMTVkhMX0yffL1+P3tsmzAG2TKj5/+KScZ7z40/\nD++Z43hH4zftlG4cukW5RZZ+F5nZw3ONbpmlXmgvneO8vHJs21n3wqzvgPHv8F2n1gIAWBBJKQAA\n1tvox9WfzlF3vM6sxMXJK7Qzvn6RCZDxV6HtMkf98ToTX6O2CjPPUf9j8+jHyuXnaIuvWf/6tj/M\nUpLi+kmenq5n1SlV9bWqem7fw2Th+leUfaafvX9VXW5ZlVEi4twk75zQxKyefbNcfsKy/ZNco/+8\n0qv7kmRWr8Gk6/GWLCVNRnbL0g/kqzUe93gS4WezNlp2301UVX+d7r55aJKrzxHLlvYqnMenkvyg\n/zwpKTVa9qNMTmas5/2xkqv205+sYdvd0iXeki37zl7r8SZrO+b1ctWxz2s5fysZ5Lys0HN4/PWA\nC+uFBgAwzVr/QwIAAFzcD8c+X+yVXxOM6pw8aSyoS5rW2veS/HZV3TXd6+junK5XzQ5JbtyXp1TV\nI1tr/7q4SP/PkelivHKSe6VPPlXVVZP8Xl/n6NbaLydsO/q/VEtysyz1ulrJpOt87376/dbaV+ds\nZy3G//93dJbGAJrHzOTTWlXVXZK8sJ89JclLknwiyfeTnDl6LqrqwCQfH202RCzjWmutqo5KN97Q\nDapq/9basX0sN89Sb7G3TBkjaHSuT8zSvTSPlRKO26rxe+uQLI0ZNY8hkkHbivHz8v+SvG8V215s\nbC8AgO2BpBQAAOvt1HS9Pvaco+54nVOn1lq598T4+lntDO2EJJvTjTV0w1kV+/FCrtjP/s867
Hvm\nOaqqDVnq4bL8HK3rNWutfTJ975GqumK6xM+DkvxJur/+P6qqvtxa+/Yc+xvSO9O9LmvndK/wG/WI\n+pMsjRc16dVsSZdASboEyU/nGJ9pltF4UvP0ktoSp6ZLnlWSnfqxjNZivOfTzJ4gy+67SR7TTy9M\ncpd+TKxJdps/vHVzZLqkVNLdH6Oxeh66rM4kpyT5rXSv7/vGHGMJbYlT0iW4r7FSxQl+kaV7Ykue\n//H7/+wtuLe2tvG413L+VtP+5kvQeQEAGIzX9wEAMI/V/KA66ulx/apaaayQ20/YbpLbzFiXJL89\nZzuDaq1tztKYK9ddYaD6u459/vQ67H6lc3SLLI1ztfwcDXHNkiSttTNbax9orT04ydP6xRuS3G+l\nbZc3tcr6KzfY9YAaJYJ+r+8hlSyNMXVSkv+YsvlxY5/vvNYYquq6WUpgzjOe1Jr19+fomt22qnac\nVX+G8bGm9l+h7k0ze3y1UY+jr85ISCXJrVfYzxD3xwlJ/rOf/ZOq2rEfl+tB/bJjW2vTxvYZ3R+7\npBtXaEijXknXq6prrmbDvifaKDl86/74Zhk9/2fm13vyrMvzsABfztLr7u46q+I6tL+tnJchE6QA\nACuSlAIAYB7njn2+7Ap1P9pPL5PkkdMqVdU+WXqt1fGttVmvB3tAVV1hxvpHjX2elkTYWt499nnq\n8S9b96512O/vrvCD9Pg5+uiydUNcs0k+Mvb5qlNrTbaae3A13txPNyR5UFXdIEsJkLf1Y2VN8r50\nvXuS5C+raq3/txr1kjo1yWfX2MZqvKef7p7Z9+dUrbWTstS77w+qao8Z1Teu0Nzo7R1Tx8/pn/2H\nr9DO0PfHHknumeTuWeotNK2XVLJ0npPk0HWMZ5LxHnZPWcP2o+d/jyQHT6tUVbfLUhLx48ueja8k\n+V7/+WFVtSVjKW01rbXTsvRHAXerqlusc/u/SDc+WZL8flXdaD3bX6OhnhUAgLlISgEAMI/xMT+u\nv0Ldw5OMxuB5ZlVdrJdAVV0u3Y+9o54aL1uhzasleWVVXWwsmap6VJLf7Wc/tkJvi63h8CyNv/PU\nST9CVtUhSe7Sz35gnV7ptFOSwyf1fqmqe2QpAfGdXDwptcXXrKru1Cd0ZhkfW+d7U2tNtpp7cDX+\nI12PqKTrIfWwsXVvvnj1Tj9+1mj97ZO8pn9V3URVdZmqul9VLX+t42g8qX+fkQBbT6/I0mvXXtrf\nG1NV1R5V9RcTVr2mn14hyRsm9bDpx4t67ArxfKefXq+qbr98ZX9OD0uy1wrtDHV/vD3Jef3n8fvj\n/CT/Mm2j1tqn042NlSR/XFXPmFY3Sapqp6p6ZFWt9KrSSd6WpefpCVX10GkVq2pDVS0/l69OckH/\n+WV98nn5dldJ8saxRb/2/PevJ3xOP7trkveO9TycFsttquqes+psJS/op5XkHVW177SKVbXnGnoY\nPidd76QdkvxbVV1nVuWq2q+q/mSV+1iNoZ4VAIC5GFMKAIB5fDnJ2el+gH5qVf0sXU+Jzf36M/re\nE2mtnV5Vj0tyVLoxkz5TVS9Llwg5O91f2j8lS68s+3Bm9zhIki8mOSTJdarq1Un+N91f9T8wSz0o\nzknyuC05yKrauGzRHcc/T8iJvau1dtb4gtbaWVX1hHQ/WF8+3fH/XbpeMJdL1xNhFOfpSf5yS2Ie\n88V0vTi+WFUvTXd9rpgu6fG4dD+IXpjk0a21i8Y3XKdrdrckz6qq/0rywSTHJ/lpv+6afRyja3Vq\nZvygP0lr7cdVtSnJvkkeWVXfSPfKsFHC4JzW2g9X02bf7oVV9bYkf5XuNWujH4yPb619bfqWSZIn\n9tvcPN3YSHetqjemG3vo9HTPy28kuW2S+6Ybs+bu6XsZ9T/ajxIxQ48nleT/rvX90/Va2znJh6rq\n6HQ9e05Il2zZPcmNkxyYLpF4Srqxt8a9Ll0vqFunO7bPV9XL070K7opJ/jDJ45P8qJ+/aia/NuyI\nJAel+4PJD1TVS9I9K+eke/XfXyS5Wb/sjhO2H5n7O2o1WmunVdW/J/mj/phGicMPttZWGr/uIUn+\nK914T8+vqvukO97jk5yV7rxcP909cJ905/16SU5eZYybq+qPk3wm3XfMkX1S423prumFfQx3SPLH\nSV6asaRSa+07VfXcJM9Pd48e11+HT6dLVt0myV8nGb2O9PWttVHvn/E4jqqqO6frlXnbJN/qn4dj\n0n0X7NS3f6t01/wmSf423ffFwrTWPt4f71OSXDfJV6vqDekS1ien+x6/Qbpn4aB0PeVOX0X7n66q\nZ6ZLfl0vydeq6oh0368/TnfvXz3d98gfpDt3b03yr+txfBMM8qwAAMyttaYoiqIoiqIoK5Ykz073\no/KkcsSE+o9NlzCYtk1L90P85afsb+NYvbsn+fcZ7ZyR5MB1OMZZsU4q+85o63ErHP9Pk9xxC+M9\nYKy9jUleP2N/5yV58ArtrfmaZak3wErlpCS3W+lY5rgnlpdjltU9pl++aY7zeNMJ7T15zmuwa5J3\nznnsF4wfe5JH9MvPmXROZxz7ASvU3TTpnCyrc7uxeiuVb0xp42rpEizTtjsxXULph/38h6a089oV\n9n9UugTZSvfHs2e0ccRqz9FY3YMmtHffOe+PayT55Jzn+VdJrr0F3we3GTvXs8qTpmz//HQJrFnb\nvi7JDjNiqCTPSpfcnOeYD13Ld0Ffb9+xes/Z0mcnyTPTJWZWivnKa2z/0emSkfOcl1dP2P6I0foV\n7oMVz0tW+e+5oiiKoijKehav7wMAYC6ttecmeVC6v+4+OUt/VT2t/uvS/XX5PyX5epIz0yU8fpTu\nR/w/aK3du7V2zhy735zkXuleQffpJD/v2/peuh4cN2ytfWL65ltfa+3V6XrRvDbJd9ON43FGuh/x\nn5PkRq21dR1DqLX2mHQ9Oj6S7hqdn+58H5bkZq21t66w/ZZcsxen6+3xinS9Wr6X7gfYzeleZ/iJ\ndL3Crt9a+8Iaj++IdL0V3psu4XH+WtqZ0O5X012XkQvS9TKZZ9tfttbuny4h8OokX0tyWrof989M\n8q10Y4b9eZJrLTv20XhSH5vzOVg3fRzXS/eD+nuS/CBdcmxzuufri+mO56B0PTgmtfGzdMf95HS9\nw85M1wPjm0lemOTmrbWvJLlyv8kZU9r5syQPSHePnN7HcGK6cbvu21p7aJKLJm27rJ1VfUetwofS\n9RYbOTXJB+bZsLX2k9baXdMl1g9Ldz/8Mt39cUa6++Wt6XqC7tla+9Fag2ytfTFdz6s/T/cd8NN0\n5+BX6XqXvjPd6wdfP2X7Zya5Rb/+O+mu5blJvp+uZ+QdWmuPbTNeM9k6z0vX4/D5ST6X7vm/oG/r\nB31sz0hyk9bai9d6vOuttfb8dN9/L0n3fXB6uut0epIvJXl5unMwdy+pZe2/Ick+SZ6W7l7/abrv\nsF+lu98/ma7n2G1ba1vU63eOWIZ6VgAAVlSttUXH
VUbn9wHzF\nmszsAP5L0/O13MhVrcOcDa5ZYHsxzwLAwbl6Wh6y6l5LOO5eH6EUbNB0je4Tp+bFq+z6yWn54LEj\ngu2vqu6V228OObSuquq8qrquqvZV1eVV9SdV9Zj19gc71Em5/YawK9Zkd9+W5LKpeVA1uZm1DrvM\nkJpd5p5VdUFVfXl6/EtVvamqHrqOvoABzLOw7ZlrYZNU1aFJHj01P7GGl27GcfeOI5SCjTs6yZHT\n88+ust/+bfceOxzYEZbWyei6emRm19hPkr2ZnXZ9blW9bgqdgXE1uZm1DrvJZtTWEUlOTnJzkkOT\nPDDJqUkurKqfWUd/wPyZZ2F7M9fC5vnxzC5le1uSP1rD68y16yCUgo07csnzfavsd+O0vOvAscBO\nMbqubkryu0m+NclR3X23zA74T0nynmmfFyU5fY39wk41qibNoTDGyNq6NrPr4n9jkj3dfVxmc+i3\nJfnbzC538pqqet6BuwA2iXkWtidzLWyiqnpYkl+dmr/d3Z9cbf9lzLXrIJRiS6uqX6iqW9b5eMWi\nxw+7zXap2e6+srt/vLvP7e4bpnXd3Rd099OTvGPa9YyquttmjQsAtrruvqi7X9zd53f3TdO6W7v7\no5ndu+Zj066vriqfNwFgjcy1sHmq6t5J3plkT5Lzk7xksSPaHfzDxVZ3p8x+AbLex2b48pLne1bZ\n74hpecPAscCizatmF11X+w9CjoybUEIyriYXXeuwUy2ktrr7K0l+fmreN7NLDgGLY56FHcZcC/NT\nVccl+WCSE5J8Ksl37Q+C18Bcuw5CKba07n5Zd9c6Hz+3ScO8Lrf/A/RVq+y3f9vnxg4HFmeONbv0\nOrybXlfdfXmSL0zN+8+zb9imRtXkQmsddrBF1tbfL3luDoXFMs/CzmSuhQ2qqmOSfCDJQ5J8Jsl3\ndvfn19GVuXYdhFKwQd3dSS6Zml+3yq4PnpZruS4p7Erd/YUkX5ya6goW79IkPT1fsSanS4c8aGoe\nVE2qdRhmSM0C24t5FgD+v6o6Msn7M7tv25WZBVKfWWd3jrvXQSgF83HOtHz8Shur6vAk3zI1P7wp\nI4Lt747q6j65fcKfa11V1QlJ7jk1L59n37Addff1Sf5xaq5Yk0kekeSY6flaanJhtQ471eCavSOP\nWPLcHAqLZ56FncdcC+tUVXuSvCfJo5JcnVkg9an19rfg4+5tSygF8/Gn0/LEqnrqCttPzewfn31J\nztq0UcH29rZp+YSq+voVtv9Uksrs1OdzVth+QFVVd7DLK6flviRnr6Vv2MH21+Tzp5vBLvcz0/L8\n7r5sHf3OvdZhlxtSs6vNoVV15yS/PDU/l+SCg+0XGMY8C9uMuRbGqKrDkvxFkscluTbJE7r74jl0\nPeqz8o4llGJXqapjq+oe+x9LNh29dP00yS9/7RVV1VV15vJt3X1hkj+bmmdW1VOm1xxSVT+Q5NXT\nttd291Xz/atgx3pXZtfKvlOSs6rqkUlSVXepqp9Octq03y9ON3v9P1ar2SQfqarTq+ohVXXItH9V\n1clVdVaS7532e3V3XzPnvwu2q99L8ukkRyV5b1U9OEmq6qiq+rUkz5r2O2P5C6da7Kp62Qr9bqjW\ngQMaVbOfqKqfqKoH7v/SbDrmfUxmv/x8zLTf6d1923z/JNjZln1WPXbJprst+7x6p2WvM8/Cggyq\nW3MtzNn03c/bkjwpyfVJntzdBxXqVtXeJTX7ghV2Wfdx92516KIHAJvswiT3W2H925e1H5fkI2vs\n+9QkX5vklCTvq6obkxyS5C7T9vcm+cU19gm7Vnd3VT0nyUeTnJDkvKq6IcnhuX3+emN3v3kd3d8v\ns7OhXpnkv6vquiRHJNmzZJ/fyu2/QINdr7v3VdUzMvsg/PAkF0+1c9fMvujqJGd09wfX2O/IWodd\na1TNZnbvmd+cnt9cVdcnOTrJYdO6W5K8tLv/aKN/A+xCXzjA+vOWtU9IcsXBdGieheHmXrcx18II\nj07y7On5nZO8c5WTEv+9u7/pYDseeNy9YwmlYE66+7qqelSSn0zy3CQPSHJzZkHYHyZ5c3f3Kl0A\ny3T3f1TVNyR5SWa/LNmb2S9aLkryhu5+xzq7/tnMrvX7zUmOT3Jckq8kuSzJx5K8qbv/fmOjh52n\nuz9eVQ9JcnqSpya5T2bX4f6HzM4GXtf1sQfWOuxqg2r2RzL7UH9Kkntl9qvwfZnNoX+dWc26gTNs\nIeZZ2HbMtTB/S89WPHx6HMhNa+181Gflnap8Rw4AAAAAAMBo7ikFAAAAAADAcEIpAAAAAAAAhhNK\nAQAAAAAAMJxQCgAAAAAAgOGEUgAAAAAAAAwnlAIAAAAAAGA4oRQAAAAAAADDCaUAAAAAAAAYTigF\nAAAAAADAcEIpAAAAAAAAhhNKAQAAAAAAMJxQCgAAAAAAgOGEUgAAAAAAAAwnlAIAAAAAAGA4oRQA\nAAAAAADDCaUAAAAAAAAYTigFAAAAAADAcP8DDCs6RbDo/5IAAAAASUVORK5CYII=\n", "text/plain": [ "" ] }, "metadata": { "image/png": { "height": 563, "width": 850 } }, "output_type": "display_data" } ], "source": [ "# change default style figure and font size\n", "plt.rcParams['figure.figsize'] = 12, 8\n", "plt.rcParams['font.size'] = 12\n", "\n", "feature_names = vect.get_feature_names()\n", "coef_plot = vis_coef(logreg, feature_names, topn = 10)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the example above, we were doing the bag of word transformation and fitting the model as separate steps. i.e. calling `CountVectorizer().fit_transform` and then `model().fit`, as we can imagine this can become tedious if we were trying to string together multiple preprocessing steps. To streamline these type of preprocessing and modeling pipeline. Scikit-learn provides an `Pipeline` object. 
To learn more about this topic, [scikit-learn's Documentation: Pipeline and FeatureUnion: combining estimators](http://scikit-learn.org/stable/modules/pipeline.html) has a really nice introduction on this subject." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Reference" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- [Youtube: PyCon 2016 Machine Learning with Text in scikit-learn](https://www.youtube.com/watch?v=znfy3T9OiAQ)\n", "- [Github: PyCon 2016 Machine Learning with Text in scikit-learn](https://github.com/justmarkham/pycon-2016-tutorial)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" }, "toc": { "nav_menu": { "height": "322px", "width": "252px" }, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "331px" }, "toc_section_display": "block", "toc_window_display": true }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 1 }