{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", " \n", "## [mlcourse.ai](https://mlcourse.ai) – Open Machine Learning Course \n", "\n", "Author: [Arseny Kravchenko](http://arseny.info/pages/about-me.html). Translated and edited by [Christina Butsko](https://www.linkedin.com/in/christinabutsko/), [Yury Kashnitskiy](https://yorko.github.io/), [Egor Polusmak](https://www.linkedin.com/in/egor-polusmak/), [Anastasia Manokhina](https://www.linkedin.com/in/anastasiamanokhina/), [Anna Larionova](https://www.linkedin.com/in/anna-larionova-74434689/), [Evgeny Sushko](https://www.linkedin.com/in/evgenysushko/) and [Yuanyuan Pao](https://www.linkedin.com/in/yuanyuanpao/). This material is subject to the terms and conditions of the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. Free use is permitted for any non-commercial purpose." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#
Topic 6. Feature Engineering and Feature Selection
\n", "In this course, we have already seen several key machine learning algorithms. However, before moving on to the more fancy ones, we’d like to take a small detour and talk about data preparation. The well-known concept of “garbage in — garbage out” applies 100% to any task in machine learning. Any experienced professional can recall numerous times when a simple model trained on high-quality data was proven to be better than a complicated multi-model ensemble built on data that wasn’t clean.\n", "\n", "To start, I wanted to review three similar but different tasks:\n", "* **feature extraction** and **feature engineering**: transformation of raw data into features suitable for modeling;\n", "* **feature transformation**: transformation of data to improve the accuracy of the algorithm;\n", "* **feature selection**: removing unnecessary features.\n", "\n", "This article will contain almost no math, but there will be a fair amount of code. Some examples will use the dataset from Renthop company, which is used in the [Two Sigma Connect: Rental Listing Inquiries Kaggle competition](https://www.kaggle.com/c/two-sigma-connect-rental-listing-inquiries). The file `train.json` is also kept [here](https://drive.google.com/open?id=1_lqydkMrmyNAgG4vU4wVmp6-j7tV0XI8) as `renthop_train.json.gz` (so do unpack it first). In this task, you need to predict the popularity of a new rental listing, i.e. classify the listing into three classes: `['low', 'medium' , 'high']`. To evaluate the solutions, we will use the log loss metric (the smaller, the better). Those who do not have a Kaggle account, will have to register; you will also need to accept the rules of the competition in order to download the data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# preload dataset automatically, if not already in place.\n", "import os\n", "\n", "import requests\n", "\n", "url = \"https://drive.google.com/uc?export=download&id=1_lqydkMrmyNAgG4vU4wVmp6-j7tV0XI8\"\n", "file_name = \"../../data/renthop_train.json.gz\"\n", "\n", "\n", "def load_renthop_dataset(url, target, overwrite=False):\n", " # check if exists already\n", " if os.path.isfile(target) and not overwrite:\n", " print(\"Dataset is already in place\")\n", " return\n", "\n", " print(\"Will download the dataset from\", url)\n", "\n", " response = requests.get(url)\n", " open(target, \"wb\").write(response.content)\n", "\n", "\n", "load_renthop_dataset(url, file_name)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:07.067528Z", "start_time": "2018-03-15T14:06:02.181930Z" } }, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "\n", "df = pd.read_json(file_name, compression=\"gzip\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Article outline\n", "\n", "1. Feature Extraction\n", " 1. Texts\n", " 2. Images\n", " 3. Geospatial data\n", " 4. Date and time\n", " 5. Time series, web, etc.\n", "\n", "2. Feature transformations\n", " 1. Normalization and changing distribution\n", " 2. Interactions\n", " 3. Filling in the missing values\n", "\n", "3. Feature selection\n", " 1. Statistical approaches\n", " 2. Selection by modeling\n", " 3. Grid search" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature Extraction\n", "\n", "In practice, data rarely comes in the form of ready-to-use matrices. That's why every task begins with feature extraction. 
Sometimes, it can be enough to read the csv file and convert it into `numpy.array`, but this is a rare exception. Let's look at some of the popular types of data from which features can be extracted." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Texts\n", "\n", "Text is a type of data that can come in different formats; there are so many text processing methods that cannot fit in a single article. Nevertheless, we will review the most popular ones.\n", "\n", "Before working with text, one must tokenize it. Tokenization implies splitting the text into units (hence, tokens). Most simply, tokens are just the words. But splitting by word can lose some of the meaning -- \"Santa Barbara\" is one token, not two, but \"rock'n'roll\" should not be split into two tokens. There are ready-to-use tokenizers that take into account peculiarities of the language, but they make mistakes as well, especially when you work with specific sources of text (newspapers, slang, misspellings, typos).\n", "\n", "After tokenization, you will normalize the data. For text, this is about stemming and/or lemmatization; these are similar processes used to process different forms of a word. One can read about the difference between them [here](http://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html).\n", "\n", "So, now that we have turned the document into a sequence of words, we can represent it with vectors. The easiest approach is called Bag of Words: we create a vector with the length of the vocabulary, compute the number of occurrences of each word in the text, and place that number of occurrences in the appropriate position in the vector. The process described looks simpler in code:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:07.087385Z", "start_time": "2018-03-15T14:06:07.068964Z" } }, "outputs": [], "source": [ "texts = [\"i have a cat\", \"you have a dog\", \"you and i have a cat and a dog\"]\n", "\n", "vocabulary = list(\n", " enumerate(set([word for sentence in texts for word in sentence.split()]))\n", ")\n", "print(\"Vocabulary:\", vocabulary)\n", "\n", "\n", "def vectorize(text):\n", " vector = np.zeros(len(vocabulary))\n", " for i, word in vocabulary:\n", " num = 0\n", " for w in text:\n", " if w == word:\n", " num += 1\n", " if num:\n", " vector[i] = num\n", " return vector\n", "\n", "\n", "print(\"Vectors:\")\n", "for sentence in texts:\n", " print(vectorize(sentence.split()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is an illustration of the process:\n", "\n", "\n", "\n", "This is an extremely naive implementation. In practice, you need to consider stop words, the maximum length of the vocabulary, more efficient data structures (usually text data is converted to a sparse vector), etc.\n", "\n", "When using algorithms like Bag of Words, we lose the order of the words in the text, which means that the texts \"i have no cows\" and \"no, i have cows\" will appear identical after vectorization when, in fact, they have the opposite meaning. To avoid this problem, we can revisit our tokenization step and use N-grams (the *sequence* of N consecutive tokens) instead." 
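, "\n", "\n",
"Going back to the normalization step for a moment (before the n-gram examples below), here is a minimal sketch of stemming vs. lemmatization; it assumes `nltk` is installed and the WordNet data has been fetched with `nltk.download('wordnet')`:\n", "\n",
"```python\n",
"from nltk.stem import PorterStemmer, WordNetLemmatizer\n",
"\n",
"stemmer = PorterStemmer()\n",
"lemmatizer = WordNetLemmatizer()\n",
"\n",
"# the stemmer crudely chops suffixes, the lemmatizer maps words to dictionary forms\n",
"for word in ['wolves', 'talked', 'feet', 'cats']:\n",
"    print(word, stemmer.stem(word), lemmatizer.lemmatize(word))\n",
"```"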
] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:08.998673Z", "start_time": "2018-03-15T14:06:07.088376Z" } }, "outputs": [], "source": [ "from sklearn.feature_extraction.text import CountVectorizer\n", "\n", "vect = CountVectorizer(ngram_range=(1, 1))\n", "vect.fit_transform([\"no i have cows\", \"i have no cows\"]).toarray()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:09.002804Z", "start_time": "2018-03-15T14:06:08.999801Z" } }, "outputs": [], "source": [ "vect.vocabulary_" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:09.110448Z", "start_time": "2018-03-15T14:06:09.003924Z" } }, "outputs": [], "source": [ "vect = CountVectorizer(ngram_range=(1, 2))\n", "vect.fit_transform([\"no i have cows\", \"i have no cows\"]).toarray()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:09.218867Z", "start_time": "2018-03-15T14:06:09.113204Z" } }, "outputs": [], "source": [ "vect.vocabulary_" ] }, { "cell_type": "markdown", "metadata": { "ExecuteTime": { "end_time": "2018-03-14T14:13:25.767656Z", "start_time": "2018-03-14T14:13:25.763924Z" } }, "source": [ "Also note that one does not have to use only words. In some cases, it is possible to generate N-grams of characters. This approach would be able to account for similarity of related words or handle typos." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:09.774148Z", "start_time": "2018-03-15T14:06:09.220060Z" } }, "outputs": [], "source": [ "from scipy.spatial.distance import euclidean\n", "from sklearn.feature_extraction.text import CountVectorizer\n", "\n", "vect = CountVectorizer(ngram_range=(3, 3), analyzer=\"char_wb\")\n", "\n", "n1, n2, n3, n4 = vect.fit_transform(\n", " [\"andersen\", \"petersen\", \"petrov\", \"smith\"]\n", ").toarray()\n", "\n", "euclidean(n1, n2), euclidean(n2, n3), euclidean(n3, n4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Adding onto the Bag of Words idea: words that are rarely found in the corpus (in all the documents of this dataset) but are present in this particular document might be more important. Then it makes sense to increase the weight of more domain-specific words to separate them out from common words. This approach is called TF-IDF (term frequency-inverse document frequency), which cannot be written in a few lines, so you should look into the details in references such as [this wiki](https://en.wikipedia.org/wiki/Tf%E2%80%93idf). The default option is as follows:\n", "\n", "$$ \\large idf(t,D) = \\log\\frac{\\mid D\\mid}{df(d,t)+1} $$\n", "\n", "$$ \\large tfidf(t,d,D) = tf(t,d) \\times idf(t,D) $$\n", "\n", "Ideas similar to Bag of Words can also be found outside of text problems e.g. bag of sites in the [Catch Me If You Can competition](https://inclass.kaggle.com/c/catch-me-if-you-can-intruder-detection-through-webpage-session-tracking), [bag of apps](https://www.kaggle.com/xiaoml/talkingdata-mobile-user-demographics/bag-of-app-id-python-2-27392), [bag of events](http://www.interdigital.com/download/58540a46e3b9659c9f000372), etc.\n", "\n", "![image](../../img/bag_of_words.png)\n", "\n", "Using these algorithms, it is possible to obtain a working solution for a simple problem, which can serve as a baseline. 
However, for those who do not like the classics, there are new approaches. The most popular method in the new wave is [Word2Vec](https://arxiv.org/pdf/1310.4546.pdf), but there are a few alternatives as well ([GloVe](https://nlp.stanford.edu/pubs/glove.pdf), [Fasttext](https://arxiv.org/abs/1607.01759), etc.).\n", "\n", "Word2Vec is a special case of the word embedding algorithms. Using Word2Vec and similar models, we can not only vectorize words in a high-dimensional space (typically a few hundred dimensions) but also compare their semantic similarity. This is a classic example of operations that can be performed on vectorized concepts: king - man + woman = queen.\n", "\n", "![image](https://cdn-images-1.medium.com/max/800/1*K5X4N-MJKt8FGFtrTHwidg.gif)\n", "\n", "It is worth noting that this model does not comprehend the meaning of the words but simply tries to position the vectors such that words used in common context are close to each other. If this is not taken into account, a lot of fun examples will come up.\n", "\n", "Such models need to be trained on very large datasets in order for the vector coordinates to capture the semantics. A pretrained model for your own tasks can be downloaded [here](https://github.com/3Top/word2vec-api#where-to-get-a-pretrained-models).\n", "\n", "Similar methods are applied in other areas such as bioinformatics. An unexpected application is [food2vec](https://jaan.io/food2vec-augmented-cooking-machine-intelligence/). You can probably think of a few other fresh ideas; the concept is universal enough." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Images\n", "\n", "Working with images is easier and harder at the same time. It is easier because it is possible to just use one of the popular pretrained networks without much thinking but harder because, if you need to dig into the details, you may end up going really deep. Let's start from the beginning.\n", "\n", "In a time when GPUs were weaker and the \"renaissance of neural networks\" had not happened yet, feature generation from images was its own complex field. One had to work at a low level, determining corners, borders of regions, color distributions statistics, and so on. Experienced specialists in computer vision could draw a lot of parallels between older approaches and neural networks; in particular, convolutional layers in today's networks are similar to [Haar cascades](https://en.wikipedia.org/wiki/Haar-like_feature). If you are interested in reading more, here are a couple of links to some interesting libraries: [skimage](http://scikit-image.org/docs/stable/api/skimage.feature.html) and [SimpleCV](http://simplecv.readthedocs.io/en/latest/SimpleCV.Features.html).\n", "\n", "Often for problems associated with images, a convolutional neural network is used. You do not have to come up with the architecture and train a network from scratch. Instead, download a pretrained state-of-the-art network with the weights from public sources. Data scientists often do so-called fine-tuning to adapt these networks to their needs by \"detaching\" the last fully connected layers of the network, adding new layers chosen for a specific task, and then training the network on new data. 
If your task is to just vectorize the image (for example, to use some non-network classifier), you only need to remove the last layers and use the output from the previous layers:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:25.714680Z", "start_time": "2018-03-15T14:06:09.775547Z" } }, "outputs": [], "source": [ "# doesn't work with Python 3.7\n", "# # Install Keras and tensorflow (https://keras.io/)\n", "# from keras.applications.resnet50 import ResNet50, preprocess_input\n", "# from keras.preprocessing import image\n", "# from scipy.misc import face\n", "# import numpy as np\n", "\n", "# resnet_settings = {'include_top': False, 'weights': 'imagenet'}\n", "# resnet = ResNet50(**resnet_settings)\n", "\n", "# # What a cute raccoon!\n", "# img = image.array_to_img(face())\n", "# img" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:27.770041Z", "start_time": "2018-03-15T14:06:25.718729Z" } }, "outputs": [], "source": [ "# # In real life, you may need to pay more attention to resizing\n", "# img = img.resize((224, 224))\n", "\n", "# x = image.img_to_array(img)\n", "# x = np.expand_dims(x, axis=0)\n", "# x = preprocess_input(x)\n", "\n", "# # Need an extra dimension because model is designed to work with an array\n", "# # of images - i.e. tensor shaped (batch_size, width, height, n_channels)\n", "\n", "# features = resnet.predict(x)" ] }, { "cell_type": "markdown", "metadata": { "ExecuteTime": { "end_time": "2018-03-14T14:44:25.102755Z", "start_time": "2018-03-14T14:44:24.374869Z" } }, "source": [ "\n", "\n", "*Here's a classifier trained on one dataset and adapted for a different one by \"detaching\" the last layer and adding a new one instead.*\n", "\n", "Nevertheless, we should not focus too much on neural network techniques. Features generated by hand are still very useful: for example, for predicting the popularity of a rental listing, we can assume that bright apartments attract more attention and create a feature such as \"the average value of the pixel\". You can find some inspiring examples in the documentation of [relevant libraries](http://pillow.readthedocs.io/en/3.1.x/reference/ImageStat.html).\n", "\n", "If there is text on the image, you can read it without unraveling a complicated neural network. For example, check out [pytesseract](https://github.com/madmaze/pytesseract)." ] }, { "cell_type": "markdown", "metadata": { "ExecuteTime": { "end_time": "2018-03-14T14:47:46.671934Z", "start_time": "2018-03-14T14:47:43.945326Z" } }, "source": [ "```python\n", "import pytesseract\n", "from PIL import Image\n", "import requests\n", "from io import BytesIO\n", "\n", "##### Just a random picture from search\n", "img = 'http://ohscurrent.org/wp-content/uploads/2015/09/domus-01-google.jpg'\n", "\n", "img = requests.get(img)\n", "img = Image.open(BytesIO(img.content))\n", "text = pytesseract.image_to_string(img)\n", "\n", "text\n", "\n", "Out: 'Google'\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One must understand that `pytesseract` is not a solution for everything." 
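, "\n", "\n",
"Before looking at a failure case below, here is a minimal sketch of the hand-crafted 'average value of the pixel' feature mentioned above; it assumes Pillow is installed, and `apartment.jpg` is just a hypothetical local file:\n", "\n",
"```python\n",
"from PIL import Image, ImageStat\n",
"\n",
"img = Image.open('apartment.jpg').convert('RGB')\n",
"stat = ImageStat.Stat(img)\n",
"# per-channel means averaged into a single rough brightness feature\n",
"brightness = sum(stat.mean) / 3\n",
"brightness\n",
"```"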
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```python\n", "##### This time we take a picture from Renthop\n", "img = requests.get('https://photos.renthop.com/2/8393298_6acaf11f030217d05f3a5604b9a2f70f.jpg')\n", "img = Image.open(BytesIO(img.content))\n", "pytesseract.image_to_string(img)\n", "\n", "Out: 'Cunveztible to 4}»'\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another case where neural networks cannot help is extracting features from meta-information. For images, EXIF stores many useful meta-information: manufacturer and camera model, resolution, use of the flash, geographic coordinates of shooting, software used to process image and more." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Geospatial data\n", "\n", "Geographic data is not so often found in problems, but it is still useful to master the basic techniques for working with it, especially since there are quite a number of ready-to-use solutions in this field.\n", "\n", "Geospatial data is often presented in the form of addresses or coordinates of (Latitude, Longitude). Depending on the task, you may need two mutually-inverse operations: geocoding (recovering a point from an address) and reverse geocoding (recovering an address from a point). Both operations are accessible in practice via external APIs from Google Maps or OpenStreetMap. Different geocoders have their own characteristics, and the quality varies from region to region. Fortunately, there are universal libraries like [geopy](https://github.com/geopy/geopy) that act as wrappers for these external services.\n", "\n", "If you have a lot of data, you will quickly reach the limits of external API. Besides, it is not always the fastest to receive information via HTTP. Therefore, it is necessary to consider using a local version of OpenStreetMap.\n", "\n", "If you have a small amount of data, enough time, and no desire to extract fancy features, you can use `reverse_geocoder` in lieu of OpenStreetMap:" ] }, { "cell_type": "markdown", "metadata": { "ExecuteTime": { "end_time": "2018-03-14T15:12:50.468269Z", "start_time": "2018-03-14T15:12:50.455393Z" } }, "source": [ "```python\n", "import reverse_geocoder as revgc\n", "\n", "revgc.search((df.latitude, df.longitude))\n", "Loading formatted geocoded file... \n", "\n", "Out: [OrderedDict([('lat', '40.74482'), \n", " ('lon', '-73.94875'), \n", " ('name', 'Long Island City'), \n", " ('admin1', 'New York'), \n", " ('admin2', 'Queens County'), \n", " ('cc', 'US')])]\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When working with geoсoding, we must not forget that addresses may contain typos, which makes the data cleaning step necessary. Coordinates contain fewer misprints, but its position can be incorrect due to GPS noise or bad accuracy in places like tunnels, downtown areas, etc. If the data source is a mobile device, the geolocation may not be determined by GPS but by WiFi networks in the area, which leads to holes in space and teleportation. While traveling along in Manhattan, there can suddenly be a WiFi location from Chicago.\n", "\n", "> WiFi location tracking is based on the combination of SSID and MAC-addresses, which may correspond to different points e.g. federal provider standardizes the firmware of routers up to MAC-address and places them in different cities. Even a company's move to another office with its routers can cause issues.\n", "\n", "The point is usually located among infrastructure. 
Here, you can really unleash your imagination and invent features based on your life experience and domain knowledge: the proximity of a point to the subway, the number of stories in the building, the distance to the nearest store, the number of ATMs around, etc. For any task, you can easily come up with dozens of features and extract them from various external sources. For problems outside an urban environment, you may consider features from more specific sources e.g. the height above sea level.\n", "\n", "If two or more points are interconnected, it may be worthwhile to extract features from the route between them. In that case, distances (great circle distance and road distance calculated by the routing graph), number of turns with the ratio of left to right turns, number of traffic lights, junctions, and bridges will be useful. In one of my own tasks, I generated a feature called \"the complexity of the road\", which computed the graph-calculated distance divided by the GCD." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Date and time\n", "\n", "You would think that date and time are standardized because of their prevalence, but, nevertheless, some pitfalls remain.\n", "\n", "Let's start with the day of the week, which are easy to turn into 7 dummy variables using one-hot encoding. In addition, we will also create a separate binary feature for the weekend called `is_weekend`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```python\n", "df['dow'] = df['created'].apply(lambda x: x.date().weekday())\n", "df['is_weekend'] = df['created'].apply(lambda x: 1 if x.date().weekday() in (5, 6) else 0)\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Some tasks may require additional calendar features. For example, cash withdrawals can be linked to a pay day; the purchase of a metro card, to the beginning of the month. In general, when working with time series data, it is a good idea to have a calendar with public holidays, abnormal weather conditions, and other important events.\n", "\n", "> Q: What do Chinese New Year, the New York marathon, and the Trump inauguration have in common?\n", "\n", "> A: They all need to be put on the calendar of potential anomalies.\n", "\n", "Dealing with hour (minute, day of the month ...) is not as simple as it seems. If you use the hour as a real variable, we slightly contradict the nature of data: `0<23` while `0:00:00 02.01> 01.01 23:00:00`. For some problems, this can be critical. At the same time, if you encode them as categorical variables, you'll breed a large numbers of features and lose information about proximity -- the difference between 22 and 23 will be the same as the difference between 22 and 7.\n", "\n", "There also exist some more esoteric approaches to such data like projecting the time onto a circle and using the two coordinates." 
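, "\n", "\n",
"The next cell implements this circular encoding. Before that, one more sketch related to the geospatial features above: the great circle distance (haversine formula) by which the 'complexity of the road' feature divides the routing-graph distance; coordinates are assumed to be in degrees:\n", "\n",
"```python\n",
"import numpy as np\n",
"\n",
"\n",
"def great_circle_distance(lat1, lon1, lat2, lon2, radius_km=6371.0):\n",
"    # haversine formula; inputs in degrees, result in kilometers\n",
"    lat1, lon1, lat2, lon2 = map(np.radians, [lat1, lon1, lat2, lon2])\n",
"    a = (np.sin((lat2 - lat1) / 2) ** 2\n",
"         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)\n",
"    return 2 * radius_km * np.arcsin(np.sqrt(a))\n",
"\n",
"\n",
"# two points in Manhattan, a few kilometers apart\n",
"great_circle_distance(40.7484, -73.9857, 40.7128, -74.0060)\n",
"```"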
] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:27.782320Z", "start_time": "2018-03-15T14:06:27.772449Z" } }, "outputs": [], "source": [ "def make_harmonic_features(value, period=24):\n", " value *= 2 * np.pi / period\n", " return np.cos(value), np.sin(value)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This transformation preserves the distance between points, which is important for algorithms that estimate distance (kNN, SVM, k-means ...)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:27.883311Z", "start_time": "2018-03-15T14:06:27.784833Z" } }, "outputs": [], "source": [ "from scipy.spatial import distance\n", "\n", "euclidean(make_harmonic_features(23), make_harmonic_features(1))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:28.250852Z", "start_time": "2018-03-15T14:06:27.884753Z" } }, "outputs": [], "source": [ "euclidean(make_harmonic_features(9), make_harmonic_features(11))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:28.801865Z", "start_time": "2018-03-15T14:06:28.252109Z" } }, "outputs": [], "source": [ "euclidean(make_harmonic_features(9), make_harmonic_features(21))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "However, the difference between such coding methods is down to the third decimal place in the metric." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Time series, web, etc.\n", "\n", "Regarding time series — we will not go into too much detail here (mostly due to my personal lack of experience), but I will point you to a [useful library that automatically generates features for time series](https://github.com/blue-yonder/tsfresh).\n", "\n", "If you are working with web data, then you usually have information about the user's User Agent. It is a wealth of information. First, one needs to extract the operating system from it. Secondly, make a feature `is_mobile`. Third, look at the browser." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:29.336832Z", "start_time": "2018-03-15T14:06:28.804134Z" } }, "outputs": [], "source": [ "# Install pyyaml ua-parser user-agents\n", "import user_agents\n", "\n", "ua = \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/56.0.2924.76 Chrome/56.0.2924.76 Safari/537.36\"\n", "ua = user_agents.parse(ua)\n", "\n", "print(\"Is a bot? \", ua.is_bot)\n", "print(\"Is mobile? \", ua.is_mobile)\n", "print(\"Is PC? \", ua.is_pc)\n", "print(\"OS Family: \", ua.os.family)\n", "print(\"OS Version: \", ua.os.version)\n", "print(\"Browser Family: \", ua.browser.family)\n", "print(\"Browser Version: \", ua.browser.version)" ] }, { "cell_type": "markdown", "metadata": { "ExecuteTime": { "end_time": "2018-03-14T16:11:22.538721Z", "start_time": "2018-03-14T16:11:22.534198Z" } }, "source": [ "> As in other domains, you can come up with your own features based on intuition about the nature of the data. At the time of this writing, Chromium 56 was new, but, after some time, only users who haven't rebooted their browser for a long time will have this version. 
In this case, why not introduce a feature called \"lag behind the latest version of the browser\"?\n", "\n", "In addition to the operating system and browser, you can look at the referrer (not always available), [http_accept_language](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Language), and other meta information.\n", "\n", "The next useful piece of information is the IP-address, from which you can extract the country and possibly the city, provider, and connection type (mobile/stationary). You need to understand that there is a variety of proxy and outdated databases, so this feature can contain noise. Network administration gurus may try to extract even fancier features like suggestions for [using VPN](https://habrahabr.ru/post/216295/). By the way, the data from the IP-address is well combined with `http_accept_language`: if the user is sitting at the Chilean proxies and browser locale is `ru_RU`, something is unclean and worth a look in the corresponding column in the table (`is_traveler_or_proxy_user`).\n", "\n", "Any given area has so many specifics that it is too much for an individual to absorb completely. Therefore, I invite everyone to share their experiences and discuss feature extraction and generation in the comments section." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature transformations\n", "\n", "### Normalization and changing distribution\n", "\n", "Monotonic feature transformation is critical for some algorithms and has no effect on others. This is one of the reasons for the increased popularity of decision trees and all its derivative algorithms (random forest, gradient boosting). Not everyone can or want to tinker with transformations, and these algorithms are robust to unusual distributions.\n", "\n", "There are also purely engineering reasons: `np.log` is a way of dealing with large numbers that do not fit in `np.float64`. This is an exception rather than a rule; often it's driven by the desire to adapt the dataset to the requirements of the algorithm. Parametric methods usually require a minimum of symmetric and unimodal distribution of data, which is not always given in real data. There may be more stringent requirements; recall [our earlier article about linear models](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-4-linear-classification-and-regression-44a41b9b5220).\n", "\n", "However, data requirements are imposed not only by parametric methods; [K nearest neighbors](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-3-classification-decision-trees-and-k-nearest-neighbors-8613c6b6d2cd) will predict complete nonsense if features are not normalized e.g. when one distribution is located in the vicinity of zero and does not go beyond (-1, 1) while the other’s range is on the order of hundreds of thousands.\n", "\n", "A simple example: suppose that the task is to predict the cost of an apartment from two variables — the distance from city center and the number of rooms. The number of rooms rarely exceeds 5 whereas the distance from city center can easily be in the thousands of meters.\n", "\n", "The simplest transformation is Standard Scaling (or Z-score normalization):\n", "\n", "$$ \\large z= \\frac{x-\\mu}{\\sigma} $$\n", "\n", "Note that Standard Scaling does not make the distribution normal in the strict sense." 
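, "\n", "\n",
"The next cells check this normality claim; first, a toy sketch of the apartment example above, showing how the raw distance-to-center feature completely dominates the Euclidean distance between listings (the rescaling is done by hand just to show the effect):\n", "\n",
"```python\n",
"import numpy as np\n",
"from scipy.spatial.distance import euclidean\n",
"\n",
"# [number of rooms, distance to the city center in meters]\n",
"flat_a = np.array([3, 2500.0])\n",
"flat_b = np.array([1, 2550.0])  # very different flat, similar location\n",
"flat_c = np.array([3, 2700.0])  # similar flat, slightly farther away\n",
"\n",
"print(euclidean(flat_a, flat_b), euclidean(flat_a, flat_c))\n",
"\n",
"# rescale the second feature to kilometers and the closest neighbor changes\n",
"scale = np.array([1.0, 1000.0])\n",
"print(euclidean(flat_a / scale, flat_b / scale),\n",
"      euclidean(flat_a / scale, flat_c / scale))\n",
"```"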
] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:29.382748Z", "start_time": "2018-03-15T14:06:29.338320Z" } }, "outputs": [], "source": [ "import numpy as np\n", "from scipy.stats import beta, shapiro\n", "from sklearn.preprocessing import StandardScaler\n", "\n", "data = beta(1, 10).rvs(1000).reshape(-1, 1)\n", "shapiro(data)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:29.509590Z", "start_time": "2018-03-15T14:06:29.385020Z" } }, "outputs": [], "source": [ "# Value of the statistic, p-value\n", "shapiro(StandardScaler().fit_transform(data))\n", "\n", "# With such p-value we'd have to reject the null hypothesis of normality of the data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But, to some extent, it protects against outliers:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:29.602528Z", "start_time": "2018-03-15T14:06:29.511150Z" } }, "outputs": [], "source": [ "data = np.array([1, 1, 0, -1, 2, 1, 2, 3, -2, 4, 100]).reshape(-1, 1).astype(np.float64)\n", "StandardScaler().fit_transform(data)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:29.713603Z", "start_time": "2018-03-15T14:06:29.605288Z" } }, "outputs": [], "source": [ "(data - data.mean()) / data.std()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another fairly popular option is MinMax Scaling, which brings all the points within a predetermined interval (typically (0, 1)).\n", "\n", "$$ \\large X_{norm}=\\frac{X-X_{min}}{X_{max}-X_{min}} $$" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:29.855619Z", "start_time": "2018-03-15T14:06:29.716000Z" } }, "outputs": [], "source": [ "from sklearn.preprocessing import MinMaxScaler\n", "\n", "MinMaxScaler().fit_transform(data)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:29.955155Z", "start_time": "2018-03-15T14:06:29.857042Z" } }, "outputs": [], "source": [ "(data - data.min()) / (data.max() - data.min())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "StandardScaling and MinMax Scaling have similar applications and are often more or less interchangeable. However, if the algorithm involves the calculation of distances between points or vectors, the default choice is StandardScaling. 
But MinMax Scaling is useful for visualization by bringing features within the interval (0, 255).\n", "\n", "If we assume that some data is not normally distributed but is described by the [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution), it can easily be transformed to a normal distribution:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:30.067680Z", "start_time": "2018-03-15T14:06:29.957011Z" } }, "outputs": [], "source": [ "from scipy.stats import lognorm\n", "\n", "data = lognorm(s=1).rvs(1000)\n", "shapiro(data)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:30.180348Z", "start_time": "2018-03-15T14:06:30.069180Z" } }, "outputs": [], "source": [ "shapiro(np.log(data))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The lognormal distribution is suitable for describing salaries, price of securities, urban population, number of comments on articles on the internet, etc. However, to apply this procedure, the underlying distribution does not necessarily have to be lognormal; you can try to apply this transformation to any distribution with a heavy right tail. Furthermore, one can try to use other similar transformations, formulating their own hypotheses on how to approximate the available distribution to a normal. Examples of such transformations are [Box-Cox transformation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.boxcox.html) (logarithm is a special case of the Box-Cox transformation) or [Yeo-Johnson transformation](https://gist.github.com/mesgarpour/f24769cd186e2db853957b10ff6b7a95) (extends the range of applicability to negative numbers). In addition, you can also try adding a constant to the feature — `np.log (x + const)`.\n", "\n", "In the examples above, we have worked with synthetic data and strictly tested normality using the Shapiro-Wilk test. Let’s try to look at some real data and test for normality using a less formal method — [Q-Q plot](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot). 
For a normal distribution, it will look like a smooth diagonal line, and visual anomalies should be intuitively understandable.\n", "\n", "![image](../../img/qq_lognorm.png)\n", "Q-Q plot for lognormal distribution\n", "\n", "![image](../../img/qq_log.png)\n", "Q-Q plot for the same distribution after taking the logarithm" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:31.801140Z", "start_time": "2018-03-15T14:06:30.182573Z" } }, "outputs": [], "source": [ "# Let's draw plots!\n", "import statsmodels.api as sm\n", "\n", "# Let's take the price feature from Renthop dataset and filter by hands the most extreme values for clarity\n", "\n", "price = df.price[(df.price <= 20000) & (df.price > 500)]\n", "price_log = np.log(price)\n", "\n", "# A lot of gestures so that sklearn didn't shower us with warnings\n", "price_mm = (\n", " MinMaxScaler()\n", " .fit_transform(price.values.reshape(-1, 1).astype(np.float64))\n", " .flatten()\n", ")\n", "price_z = (\n", " StandardScaler()\n", " .fit_transform(price.values.reshape(-1, 1).astype(np.float64))\n", " .flatten()\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Q-Q plot of the initial feature" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:32.717805Z", "start_time": "2018-03-15T14:06:31.802677Z" } }, "outputs": [], "source": [ "sm.qqplot(price, loc=price.mean(), scale=price.std())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Q-Q plot after StandardScaler. Shape doesn’t change" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:32.987753Z", "start_time": "2018-03-15T14:06:32.719093Z" } }, "outputs": [], "source": [ "sm.qqplot(price_z, loc=price_z.mean(), scale=price_z.std())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Q-Q plot after MinMaxScaler. Shape doesn’t change" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:33.243510Z", "start_time": "2018-03-15T14:06:32.988869Z" } }, "outputs": [], "source": [ "sm.qqplot(price_mm, loc=price_mm.mean(), scale=price_mm.std())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Q-Q plot after taking the logarithm. Things are getting better!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:33.510998Z", "start_time": "2018-03-15T14:06:33.244652Z" } }, "outputs": [], "source": [ "sm.qqplot(price_log, loc=price_log.mean(), scale=price_log.std())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let’s see whether transformations can somehow help the real model. There is no silver bullet here." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Interactions\n", "\n", "If previous transformations seemed rather math-driven, this part is more about the nature of the data; it can be attributed to both feature transformations and feature creation.\n", "\n", "Let’s come back again to the Two Sigma Connect: Rental Listing Inquiries problem. Among the features in this problem are the number of rooms and the price. Logic suggests that the cost per single room is more indicative than the total cost, so we can generate such a feature." 
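, "\n", "\n",
"The next cell builds exactly this price-per-room feature. Interactions can also be generated mechanically; here is a minimal sketch with scikit-learn's `PolynomialFeatures` (discussed a bit more after the next cell):\n", "\n",
"```python\n",
"import numpy as np\n",
"from sklearn.preprocessing import PolynomialFeatures\n",
"\n",
"X = np.array([[1.0, 2.0],\n",
"              [3.0, 4.0]])\n",
"\n",
"# degree-2 columns: x1, x2, x1^2, x1*x2, x2^2\n",
"PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)\n",
"```"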
] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:33.800002Z", "start_time": "2018-03-15T14:06:33.512381Z" } }, "outputs": [], "source": [ "rooms = df[\"bedrooms\"].apply(lambda x: max(x, 0.5))\n", "# Avoid division by zero; .5 is chosen more or less arbitrarily\n", "df[\"price_per_bedroom\"] = df[\"price\"] / rooms" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should limit yourself in this process. If there are a limited number of features, it is possible to generate all the possible interactions and then weed out the unnecessary ones using the techniques described in the next section. In addition, not all interactions between features must have a physical meaning; for example, polynomial features (see [sklearn.preprocessing.PolynomialFeatures](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html)) are often used in linear models and are almost impossible to interpret." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Filling in the missing values\n", "\n", "Not many algorithms can work with missing values, and the real world often provides data with gaps. Fortunately, this is one of the tasks for which one doesn’t need any creativity. Both key python libraries for data analysis provide easy-to-use solutions: [pandas.DataFrame.fillna](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html) and [sklearn.preprocessing.Imputer](http://scikit-learn.org/stable/modules/preprocessing.html#imputation).\n", "\n", "These solutions do not have any magic happening behind the scenes. Approaches to handling missing values are pretty straightforward:\n", "\n", "* encode missing values with a separate blank value like `\"n/a\"` (for categorical variables);\n", "* use the most probable value of the feature (mean or median for the numerical variables, the most common value for categorical variables);\n", "* or, conversely, encode with some extreme value (good for decision-tree models since it allows the model to make a partition between the missing and non-missing values);\n", "* for ordered data (e.g. time series), take the adjacent value — next or previous.\n", "\n", "![image](https://cdn-images-1.medium.com/max/800/0*Ps-v8F0fBgmnG36S.)\n", "\n", "Easy-to-use library solutions sometimes suggest sticking to something like `df = df.fillna(0)` and not sweat the gaps. But this is not the best solution: data preparation takes more time than building models, so thoughtless gap-filling may hide a bug in processing and damage the model." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feature selection\n", "\n", "Why would it even be necessary to select features? To some, this idea may seem counterintuitive, but there are at least two important reasons to get rid of unimportant features. The first is clear to every engineer: the more data, the higher the computational complexity. As long as we work with toy datasets, the size of the data is not a problem, but, for real loaded production systems, hundreds of extra features will be quite tangible. The second reason is that some algorithms take noise (non-informative features) as a signal and overfit.\n", "\n", "### Statistical approaches\n", "\n", "The most obvious candidate for removal is a feature whose value remains unchanged, i.e., it contains no information at all. 
If we build on this thought, it is reasonable to say that features with low variance are worse than those with high variance. So, one can consider cutting features with variance below a certain threshold." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:34.669290Z", "start_time": "2018-03-15T14:06:33.801760Z" } }, "outputs": [], "source": [ "from sklearn.datasets import make_classification\n", "from sklearn.feature_selection import VarianceThreshold\n", "\n", "x_data_generated, y_data_generated = make_classification()\n", "x_data_generated.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:34.674947Z", "start_time": "2018-03-15T14:06:34.670899Z" } }, "outputs": [], "source": [ "VarianceThreshold(0.7).fit_transform(x_data_generated).shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:34.801683Z", "start_time": "2018-03-15T14:06:34.676130Z" } }, "outputs": [], "source": [ "VarianceThreshold(0.8).fit_transform(x_data_generated).shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:34.903665Z", "start_time": "2018-03-15T14:06:34.803278Z" } }, "outputs": [], "source": [ "VarianceThreshold(0.9).fit_transform(x_data_generated).shape" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are other ways that are also [based on classical statistics](http://scikit-learn.org/stable/modules/feature_selection.html#univariate-feature-selection)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:35.235212Z", "start_time": "2018-03-15T14:06:34.908459Z" } }, "outputs": [], "source": [ "from sklearn.feature_selection import SelectKBest, f_classif\n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn.model_selection import cross_val_score\n", "\n", "x_data_kbest = SelectKBest(f_classif, k=5).fit_transform(\n", " x_data_generated, y_data_generated\n", ")\n", "x_data_varth = VarianceThreshold(0.9).fit_transform(x_data_generated)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "logit = LogisticRegression(solver=\"lbfgs\", random_state=17)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:35.251270Z", "start_time": "2018-03-15T14:06:35.237608Z" } }, "outputs": [], "source": [ "cross_val_score(\n", " logit, x_data_generated, y_data_generated, scoring=\"neg_log_loss\", cv=5\n", ").mean()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:35.355729Z", "start_time": "2018-03-15T14:06:35.252493Z" } }, "outputs": [], "source": [ "cross_val_score(\n", " logit, x_data_kbest, y_data_generated, scoring=\"neg_log_loss\", cv=5\n", ").mean()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:35.500340Z", "start_time": "2018-03-15T14:06:35.356862Z" } }, "outputs": [], "source": [ "cross_val_score(\n", " logit, x_data_varth, y_data_generated, scoring=\"neg_log_loss\", cv=5\n", ").mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that our selected features have improved the quality of the classifier. Of course, this example is purely artificial; however, it is worth using for real problems." 
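, "\n", "\n",
"Another univariate option from the same module is selection by mutual information; a quick sketch reusing the data and the `logit` classifier from the cells above (no tuning, just to show the call):\n", "\n",
"```python\n",
"from sklearn.feature_selection import SelectKBest, mutual_info_classif\n",
"from sklearn.model_selection import cross_val_score\n",
"\n",
"x_data_mi = SelectKBest(mutual_info_classif, k=5).fit_transform(\n",
"    x_data_generated, y_data_generated\n",
")\n",
"cross_val_score(\n",
"    logit, x_data_mi, y_data_generated, scoring='neg_log_loss', cv=5\n",
").mean()\n",
"```"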
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Selection by modeling\n", "\n", "Another approach is to use some baseline model for feature evaluation because the model will clearly show the importance of the features. Two types of models are usually used: some “wooden” composition such as [Random Forest](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-5-ensembles-of-algorithms-and-random-forest-8e05246cbba7) or a linear model with Lasso regularization so that it is prone to nullify weights of weak features. The logic is intuitive: if features are clearly useless in a simple model, there is no need to drag them to a more complex one." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:35.975262Z", "start_time": "2018-03-15T14:06:35.502079Z" } }, "outputs": [], "source": [ "# Synthetic example\n", "\n", "from sklearn.datasets import make_classification\n", "from sklearn.ensemble import RandomForestClassifier\n", "from sklearn.feature_selection import SelectFromModel\n", "from sklearn.model_selection import cross_val_score\n", "from sklearn.pipeline import make_pipeline\n", "\n", "x_data_generated, y_data_generated = make_classification()\n", "\n", "rf = RandomForestClassifier(n_estimators=10, random_state=17)\n", "pipe = make_pipeline(SelectFromModel(estimator=rf), logit)\n", "\n", "print(\n", " cross_val_score(\n", " logit, x_data_generated, y_data_generated, scoring=\"neg_log_loss\", cv=5\n", " ).mean()\n", ")\n", "print(\n", " cross_val_score(\n", " rf, x_data_generated, y_data_generated, scoring=\"neg_log_loss\", cv=5\n", " ).mean()\n", ")\n", "print(\n", " cross_val_score(\n", " pipe, x_data_generated, y_data_generated, scoring=\"neg_log_loss\", cv=5\n", " ).mean()\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We must not forget that this is not a silver bullet again - it can make the performance worse." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:36.095657Z", "start_time": "2018-03-15T14:06:35.976948Z" } }, "outputs": [], "source": [ "# x_data, y_data = get_data()\n", "x_data = x_data_generated\n", "y_data = y_data_generated\n", "\n", "pipe1 = make_pipeline(StandardScaler(), SelectFromModel(estimator=rf), logit)\n", "\n", "pipe2 = make_pipeline(StandardScaler(), logit)\n", "\n", "print(\n", " \"LR + selection: \",\n", " cross_val_score(pipe1, x_data, y_data, scoring=\"neg_log_loss\", cv=5).mean(),\n", ")\n", "print(\n", " \"LR: \", cross_val_score(pipe2, x_data, y_data, scoring=\"neg_log_loss\", cv=5).mean()\n", ")\n", "print(\"RF: \", cross_val_score(rf, x_data, y_data, scoring=\"neg_log_loss\", cv=5).mean())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Grid search\n", "Finally, we get to the most reliable method, which is also the most computationally complex: trivial grid search. Train a model on a subset of features, store results, repeat for different subsets, and compare the quality of models to identify the best feature set. This approach is called [Exhaustive Feature Selection](http://rasbt.github.io/mlxtend/user_guide/feature_selection/ExhaustiveFeatureSelector/).\n", "\n", "Searching all combinations usually takes too long, so you can try to reduce the search space. 
Fix a small number N, iterate through all combinations of N features, choose the best combination, and then iterate through the combinations of (N + 1) features so that the previous best combination of features is fixed and only a single new feature is considered. It is possible to iterate until we hit a maximum number of characteristics or until the quality of the model ceases to increase significantly. This algorithm is called [Sequential Feature Selection](http://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/).\n", "\n", "This algorithm can be reversed: start with the complete feature space and remove features one by one until it does not impair the quality of the model or until the desired number of features is reached." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2018-03-15T14:06:44.047841Z", "start_time": "2018-03-15T14:06:36.096849Z" } }, "outputs": [], "source": [ "# Install mlxtend\n", "from mlxtend.feature_selection import SequentialFeatureSelector\n", "\n", "selector = SequentialFeatureSelector(\n", " logit, scoring=\"neg_log_loss\", verbose=2, k_features=3, forward=False, n_jobs=-1\n", ")\n", "\n", "selector.fit(x_data, y_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Take a look how this approach was done in one [simple yet elegant Kaggle kernel](https://www.kaggle.com/arsenyinfo/easy-feature-selection-pipeline-0-55-at-lb)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.6" }, "toc": { "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }