{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/fonnesbeck/Bios8366/blob/master/notebooks/Section6_1-Scikit-Learn.ipynb)\n", "\n", "# Introduction to `Scikit-learn`\n", "\n", "The `scikit-learn` package is an open-source library that provides a robust set of machine learning algorithms for Python. It is built upon the core Python scientific stack (*i.e.* NumPy, SciPy, Cython), and has a simple, consistent API, making it useful for a wide range of statistical learning applications.\n", "\n", "![sklearn](images/sklearn.png)\n", "\n", "## What is Machine Learning?\n", "\n", "Machine Learning (ML) is about coding programs that automatically adjust their performance from exposure to information encoded in data. This learning is achieved via **tunable parameters** that are automatically adjusted according to performance criteria.\n", "\n", "Machine Learning can be considered a subfield of Artificial Intelligence (AI).\n", "\n", "There are three major classes of ML:\n", "\n", "**Supervised learning**\n", ": Algorithms which learn from a training set of *labeled* examples (exemplars) to generalize to the set of all possible inputs. Examples of supervised learning include regression and support vector machines.\n", "\n", "**Unsupervised learning**\n", ": Algorithms which learn from a training set of *unlableled* examples, using the features of the inputs to categorize inputs together according to some statistical criteria. Examples of unsupervised learning include k-means clustering and kernel density estimation.\n", "\n", "**Reinforcement learning**\n", ": Algorithms that learn via reinforcement from a *critic* that provides information on the quality of a solution, but not on how to improve it. Improved solutions are achieved by iteratively exploring the solution space. We will not cover RL in this course.\n", "\n", "## Representing Data in `scikit-learn`\n", "\n", "Most machine learning algorithms implemented in scikit-learn expect data to be stored in a\n", "**two-dimensional array or matrix**. The arrays can be\n", "either ``numpy`` arrays, or in some cases ``scipy.sparse`` matrices.\n", "The size of the array is expected to be `[n_samples, n_features]`\n", "\n", "- **n_samples:** The number of samples: each sample is an item to process (e.g. classify).\n", " A sample can be a document, a picture, a sound, a video, an astronomical object,\n", " a row in database or CSV file,\n", " or whatever you can describe with a fixed set of quantitative traits.\n", "- **n_features:** The number of features or distinct traits that can be used to describe each\n", " item in a quantitative manner. Features are generally real-valued, but may be boolean or\n", " discrete-valued in some cases.\n", "\n", "The number of features must be fixed in advance. However it can be very high dimensional\n", "(e.g. millions of features) with most of them being zeros for a given sample. This is a case\n", "where `scipy.sparse` matrices can be useful, in that they are\n", "much more memory-efficient than numpy arrays.\n", "\n", "# Example: Iris morphometrics\n", "\n", "One of the datasets included with `scikit-learn` is a set of measurements for flowers, each being a member of one of three species: *Iris Setosa*, *Iris Versicolor* or *Iris Virginica*. 
\n", "\n", "![iris](images/iris.jpg)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import load_iris\n", "iris = load_iris()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The data is stored as a `dict` with elements corresponding to the predictors (`data`), the species name (`target`), as well as labels associated with these values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris.keys()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n_samples, n_features = iris.data.shape\n", "n_samples, n_features" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is a sample row of the data, with the corresponding labels." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris.data[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris.feature_names" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The information about the class of each sample is stored in the ``target`` attribute of the dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris.target" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris.target_names" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We probably want to convert the data into a more convenient structure, namely, a `DataFrame`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "iris_df = pd.DataFrame(iris.data, columns=iris.feature_names).assign(species=iris.target_names[iris.target])\n", "\n", "iris_df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Principal Component Analysis\n", "\n", "As an introductory application of machine learning methods, let's apply **principal components analysis** (PCA) to this dataset. Though we have 4 variables to work with, it appears that there is redundant information among them, so we might try to reduce the dimension of the problem, deriving a smaller number of latent variables that describe most of the overall variation in the dataset. For example, we might want 2 variables so that we can visualize differences among the species graphically.\n", "\n", "PCA is a transformation for identifying a set of latent variables corresponding to orthogonal components of variation. It locates a vector that describes an axis of largest variation in the hyperspace of the original variables. Then, conditional on the first vector, the algorithm picks out the next vector of maximum variation, but one which is orthogonal to the first component. It then identifies a third such orthogonal vector, and so on, up to the number of original variables in the dataset.\n", "\n", "Once we have this orthogonal set of variables, we can see if the smallest ones can be discarded without greatly reducing the amount of variation described by the remaining subset.\n", "\n", "Here is an illustraion using the iris dataset." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import seaborn as sns\n", "\n", "sns.pairplot(iris_df, hue='species', height=1.5);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see, for example, that the petal variables appear to be redundant with respect to one another." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "What PCA will do is formulate a set of **orthogonal** varibles, where the number of orthogonal axes is smaller than the number of original variables. It then **projects** the original data onto these axes to obtain transformed variables. \n", "\n", "The key concept is that each set of axes constructed maximizes the amount of residual variability explained. \n", "\n", "We can then fit models to the subset of orthogonal variables that accounts for most of the variability.\n", "\n", "Let's do a PCA by hand first, before using scikit-learn:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Standardization\n", "\n", "An important first step for many datasets is to **standardize** the original data. Its important for all variables to be on the same scale because the PCA algorithm will be seeking to maximize variance along each axis. If one variable is numerically larger than another variable, it will tend to have larger variance, and will therefore garner undue attention from the algorithm. \n", "\n", "This dataset is approximately on the same scale, though there are differences, particularly in the fourth variable (petal width):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris.data[:5]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's apply a standardization transformation using the preprocessing capabilities of scikit-learn:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.preprocessing import StandardScaler\n", "\n", "X_std = StandardScaler().fit_transform(iris.data)\n", "X_std[:5]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Eigendecomposition\n", "\n", "The PCA algorithm is driven by the eigenvalues and eigenvectors of the original dataset. \n", "\n", "- The eigenvectors determine the direction of each component\n", "- The eigenvalues determine the length (magnitude) of the component\n", "\n", "The eigendecomposition is performed on the covariance matrix of the data, which we can derive here using NumPy." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "Sigma = np.cov(X_std.T)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "evals, evecs = np.linalg.eig(Sigma)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "evals" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "from mpl_toolkits.mplot3d import Axes3D\n", "from mpl_toolkits.mplot3d import proj3d\n", "from matplotlib.patches import FancyArrowPatch\n", "\n", "variables = [name[:name.find(' (')]for name in iris.feature_names]\n", "\n", "class Arrow3D(FancyArrowPatch):\n", " def __init__(self, xs, ys, zs, *args, **kwargs):\n", " FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)\n", " self._verts3d = xs, ys, zs\n", " \n", " def do_3d_projection(self, renderer=None):\n", " xs3d, ys3d, zs3d = self._verts3d\n", " xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, self.axes.M)\n", " self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))\n", "\n", " return np.min(zs)\n", "\n", "fig = plt.figure(figsize=(7,7))\n", "ax = fig.add_subplot(111, projection='3d')\n", "\n", "ax.plot(X_std[:,0], X_std[:,1], X_std[:,2], 'o', markersize=8, \n", " color='green', \n", " alpha=0.2)\n", "\n", "mean_x, mean_y, mean_z = X_std.mean(0)[:-1]\n", "ax.plot([mean_x], [mean_y], [mean_z], 'o', markersize=10, color='red', alpha=0.5)\n", "for v in evecs:\n", " a = Arrow3D([mean_x, v[0]], [mean_y, v[1]], [mean_z, v[2]], mutation_scale=20, lw=3, arrowstyle=\"-|>\", color=\"r\")\n", " # ax.add_artist(a)\n", "ax.set_xlabel(variables[0])\n", "ax.set_ylabel(variables[1])\n", "ax.set_zlabel(variables[2])\n", "\n", "plt.title('Eigenvectors')\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Selecting components\n", "\n", "The eigenvectors are the principle components, which are normalized linear combinations of the original features. They are ordered, in terms of the amount of variation in the dataset that they account for." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, axes = plt.subplots(2, 1)\n", "\n", "total = evals.sum()\n", "variance_explained = 100* np.sort(evals)[::-1]/total\n", "\n", "axes[0].bar(range(4), variance_explained)\n", "axes[0].set_xticks(range(4));\n", "axes[0].set_xticklabels(['Component ' + str(i+1) for i in range(4)])\n", "\n", "axes[1].plot(range(5), np.r_[0, variance_explained.cumsum()])\n", "axes[1].set_xticks(range(5));\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Projecting the data\n", "\n", "The next step is to **project** the original data onto the orthogonal axes.\n", "\n", "Let's extract the first two eigenvectors and use them as the projection matrix for the original (standardized) variables." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "W = evecs[:, :2]\n", "Y = X_std @ W" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_proj = pd.DataFrame(np.hstack((Y, iris.target.astype(int).reshape(-1, 1))),\n", " columns=['Component 1', 'Component 2', 'Species'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sns.lmplot(x='Component 1', y='Component 2',\n", " data=df_proj,\n", " fit_reg=False,\n", " hue='Species');" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The PCA procedure is implemented in `scikit-learn` in its `decompoisition` library:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.decomposition import PCA\n", "\n", "pca = PCA(n_components=2, whiten=True).fit(iris.data)\n", "X_pca = pca.transform(iris.data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By convention, `scikit-learn` exposes values estimated or calculated by its methods as public attributes on the model itself, with names appended with an underscore.\n", "\n", "Inspecting the `explained_variance_ratio_` attribute for our model, we see that the first two components explain more than 97% of the variation in the dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pca.explained_variance_ratio_" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "iris_df['First Component'] = X_pca[:, 0]\n", "iris_df['Second Component'] = X_pca[:, 1]\n", "\n", "sns.lmplot(x='First Component', y='Second Component', \n", " data=iris_df, \n", " fit_reg=False, \n", " hue=\"species\");" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## `scikit-learn` interface\n", "\n", "All objects within scikit-learn share a uniform common basic API consisting of three complementary interfaces: \n", "\n", "* **estimator** interface for building and fitting models\n", "* **predictor** interface for making predictions\n", "* **transformer** interface for converting data.\n", "\n", "The estimator interface is at the core of the library. It defines instantiation mechanisms of objects and exposes a fit method for learning a model from training data. All supervised and unsupervised learning algorithms (*e.g.*, for classification, regression or clustering) are offered as objects implementing this interface. Machine learning tasks like feature extraction, feature selection or dimensionality reduction are also provided as estimators.\n", "\n", "The consistent interface across machine learning methods makes it easy to switch between different approaches without drastically changing the form of the data or the supporting code, making experimentation and prototyping fast and easy.\n", "\n", "Scikit-learn strives to have a uniform interface across all methods. 
For example, a typical **estimator** follows this template:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class Estimator(object):\n", "\n", "    def fit(self, X, y=None):\n", "        \"\"\"Fit model to data X (and y)\"\"\"\n", "        self.some_attribute = self.some_fitting_method(X, y)\n", "        return self\n", "\n", "    def predict(self, X_test):\n", "        \"\"\"Make prediction based on passed features\"\"\"\n", "        pred = self.make_prediction(X_test)\n", "        return pred" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For a given scikit-learn **estimator** object (let's call it `model`), several methods are available. Irrespective of the type of **estimator**, there will be a `fit` method:\n", "\n", "- `model.fit` : fit training data. For supervised learning applications, this accepts two arguments: the data `X` and the labels `y` (e.g. `model.fit(X, y)`). For unsupervised learning applications, this accepts only a single argument, the data `X` (e.g. `model.fit(X)`).\n", "\n", "> During the fitting process, the state of the **estimator** is stored in attributes of the estimator instance named with a trailing underscore character (\\_). For example, the sequence of regression trees (`sklearn.tree.DecisionTreeRegressor`) built by a `sklearn.ensemble.GradientBoostingRegressor` is stored in its `estimators_` attribute.\n", "\n", "The **predictor** interface extends the notion of an estimator by adding a `predict` method that takes an array `X_test` and produces predictions based on the learned parameters of the estimator. In the case of supervised learning estimators, this method typically returns the predicted labels or values computed by the model. Some unsupervised learning estimators may also implement the predict interface, such as k-means, where the predicted values are the cluster labels.\n", "\n", "**Supervised estimators** are expected to have the following methods:\n", "\n", "- `model.predict` : given a trained model, predict the label of a new set of data. This method accepts one argument, the new data `X_new` (e.g. `model.predict(X_new)`), and returns the learned label for each object in the array.\n", "- `model.predict_proba` : for classification problems, some estimators also provide this method, which returns the probability that a new observation has each categorical label. In this case, the label with the highest probability is returned by `model.predict()`.\n", "- `model.score` : most estimators implement a `score` method, with larger scores indicating a better fit. For classifiers, the default score is accuracy (between 0 and 1); for regressors, it is the coefficient of determination $R^2$.\n", "\n", "Since it is common to modify or filter data before feeding it to a learning algorithm, some estimators in the library implement a **transformer** interface which defines a `transform` method. It takes as input some new data `X_test` and yields as output a transformed version. Preprocessing, feature selection, feature extraction and dimensionality reduction algorithms are all provided as transformers within the library.\n", "\n", "**Unsupervised estimators** that transform data have the following methods:\n", "\n", "- `model.transform` : given an unsupervised model, transform new data into the new basis. This also accepts one argument `X_new`, and returns the new representation of the data based on the unsupervised model.\n", "- `model.fit_transform` : some estimators implement this method, which more efficiently performs a fit and a transform on the same input data."
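, "\n", "\n", "To make these methods concrete, here is a minimal sketch (not part of the original analysis) that fits a k-nearest neighbors classifier to the iris data and exercises `predict`, `predict_proba`, and `score`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.neighbors import KNeighborsClassifier\n", "\n", "knn = KNeighborsClassifier(n_neighbors=5)\n", "knn.fit(iris.data, iris.target)            # estimator interface\n", "print(knn.predict(iris.data[:3]))          # predictor interface: predicted class labels\n", "print(knn.predict_proba(iris.data[:3]))    # per-class membership probabilities\n", "print(knn.score(iris.data, iris.target))   # mean accuracy on the supplied data"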
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Regression Analysis\n", "\n", "To demonstrate how `scikit-learn` is used, let's conduct a logistic regression analysis on a dataset for very low birth weight (VLBW) infants.\n", "\n", "Data on 671 infants with very low (less than 1600 grams) birth weight from 1981-87 were collected at Duke University Medical Center by [OShea *et al.* (1992)](http://www.ncbi.nlm.nih.gov/pubmed/1635885). Of interest is the relationship between the outcome intra-ventricular hemorrhage and the predictors birth weight, gestational age, presence of pneumothorax, mode of delivery, single vs. multiple birth, and whether the birth occurred at Duke or at another hospital with later transfer to Duke. A secular trend in the outcome is also of interest.\n", "\n", "The metadata for this dataset can be found [here](http://biostat.mc.vanderbilt.edu/wiki/pub/Main/DataSets/Cvlbw.html)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "DATA_URL = 'https://raw.githubusercontent.com/fonnesbeck/Bios8366/master/data/'\n", "\n", "try:\n", " vlbw = pd.read_csv(\"../data/vlbw.csv\", index_col=0)\n", "except FileNotFoundError:\n", " vlbw = pd.read_csv(DATA_URL + \"vlbw.csv\", index_col=0)\n", " \n", "\n", "subset = vlbw[['ivh', 'gest', 'bwt', 'delivery', 'inout', \n", " 'pltct', 'lowph', 'pneumo', 'twn', 'apg1']].dropna()\n", "\n", "# Extract response variable\n", "y = subset.ivh.replace({'absent':0, 'possible':1, 'definite':1})\n", "\n", "# Standardize some variables\n", "X = subset[['gest', 'bwt', 'pltct', 'lowph']]\n", "X0 = (X - X.mean(axis=0)) / X.std(axis=0)\n", "\n", "# Recode some variables\n", "X0['csection'] = subset.delivery.replace({'vaginal':0, 'abdominal':1})\n", "X0['transported'] = subset.inout.replace({'born at Duke':0, 'transported':1})\n", "X0[['pneumo', 'twn', 'apg1']] = subset[['pneumo', 'twn','apg1']]\n", "X0.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We first split the data into a training set and a testing set. By default, 25% of the data is reserved for testing. This is the first of multiple ways that we will see to do this." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "X_train, X_test, y_train, y_test = train_test_split(X0, y)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `LogisticRegression` model in scikit-learn employs a regularization coefficient `C`, which defaults to 1. The amount of regularization is lower with larger values of C.\n", "\n", "Regularization penalizes the values of regression coefficients, while smaller ones let the coefficients range widely. Scikit-learn includes two penalties: a **l2** penalty which penalizes the sum of the squares of the coefficients (the default), and a **l1** penalty which penalizes the sum of the absolute values.\n", "\n", "The reason for doing regularization is to let us to include more covariates than our data might otherwise allow. We only have a few coefficients, so we will set `C` to a large value." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.linear_model import LogisticRegression\n", "\n", "lrmod = LogisticRegression(C=1000, solver='lbfgs')\n", "lrmod" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `__repr__` method of `scikit-learn` models prints out all of the hyperparameter values used when running the model. 
It is recommended to inspect these prior to running the model, as they can strongly influence the resulting estimates and predictions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lrmod.fit(X_train, y_train)\n", "\n", "pred_train = lrmod.predict(X_train)\n", "pred_test = lrmod.predict(X_test)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.crosstab(y_train, pred_train,\n", "            rownames=[\"Actual\"], colnames=[\"Predicted\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pd.crosstab(y_test, pred_test,\n", "            rownames=[\"Actual\"], colnames=[\"Predicted\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The regression coefficients can be inspected in the `coef_` attribute that the fitting procedure attached to the model object." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for name, value in zip(X0.columns, lrmod.coef_[0]):\n", "    print('{0}:\\t{1:.2f}'.format(name, value))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`scikit-learn` does not calculate confidence intervals for the model coefficients, but we can bootstrap some in just a few lines of Python:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "n = 1000\n", "boot_samples = np.empty((n, len(lrmod.coef_[0])))\n", "\n", "for i in np.arange(n):\n", "    # Resample rows with replacement and refit the model on each bootstrap sample\n", "    boot_ind = np.random.randint(0, len(X0), len(X0))\n", "    y_i, X_i = y.values[boot_ind], X0.values[boot_ind]\n", "\n", "    lrmod_i = LogisticRegression(C=1000, solver='lbfgs')\n", "    lrmod_i.fit(X_i, y_i)\n", "\n", "    boot_samples[i] = lrmod_i.coef_[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "boot_samples.sort(axis=0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Rows 25 and 975 of the sorted samples give an approximate 95% interval for each coefficient\n", "boot_se = boot_samples[[25, 975], :].T" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "coefs = lrmod.coef_[0]\n", "plt.plot(coefs, 'r.')\n", "for i in range(len(coefs)):\n", "    # Vertical segment from the lower to the upper bootstrap bound\n", "    plt.errorbar(x=[i, i], y=boot_se[i], color='red')\n", "plt.xlim(-0.5, 8.5)\n", "plt.xticks(range(len(coefs)), X0.columns.values, rotation=45)\n", "plt.axhline(0, color='k', linestyle='--')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## References\n", "\n", "- [`scikit-learn` user's guide](http://scikit-learn.org/stable/user_guide.html)\n", "- VanderPlas, J. (2016) [Python Data Science Handbook: Essential Tools for Working with Data](http://shop.oreilly.com/product/0636920034919.do). O'Reilly Media." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.9" }, "latex_envs": { "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 0 } }, "nbformat": 4, "nbformat_minor": 2 }