{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "UxK08xN_wco-" }, "source": [ "# HW5: Model Predictive Control\n", "\n", "> - Full Name: **[Full Name]**\n", "> - Student ID: **[Stundet ID]**\n", "\n", "\n", "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DeepRLCourse/Homework-5-Questions/blob/main/MPC.ipynb)\n", "[![Open In kaggle](https://kaggle.com/static/images/open-in-kaggle.svg)](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/DeepRLCourse/Homework-5-Questions/main/MPC.ipynb)\n", "\n", "## Overview\n", "Here the goal is to use **MPC** for [gymnasium environments](https://gymnasium.farama.org/).\n", "More specificly we focus on the [Pendulum](https://gymnasium.farama.org/environments/classic_control/pendulum/) environment and try to solve it using [mpc.pytorch](https://locuslab.github.io/mpc.pytorch/).\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "cellView": "form", "id": "SBD13rNwmXpx" }, "outputs": [], "source": [ "# @title Imports\n", "\n", "# Stuff you (might) need\n", "import random\n", "import numpy as np\n", "import gymnasium as gym\n", "\n", "import torch\n", "from torch import nn\n", "import torch.autograd\n", "from tqdm.notebook import trange\n", "import math\n", "\n", "# Stuff used for visualization\n", "from matplotlib import pyplot as plt\n", "from gymnasium.wrappers import RecordVideo\n", "import base64\n", "import imageio\n", "from IPython.display import HTML\n", "import warnings\n", "warnings.filterwarnings(\"ignore\")" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "cellView": "form", "id": "Avcbl8VMmgm1" }, "outputs": [], "source": [ "# @title Visualization Functions\n", "\n", "def embed_mp4(filename):\n", " video = open(filename,'rb').read()\n", " b64 = base64.b64encode(video)\n", " tag = '''\n", " '''.format(b64.decode())\n", "\n", " return HTML(tag)\n", "\n", "\n", "def plot_results(rewards, actions):\n", " plt.plot(rewards, label='Rewards')\n", " plt.plot(actions, label='Actions')\n", " plt.legend()\n", " plt.title(f\"Total reward: {sum(rewards):.2f}\")\n", " plt.show()" ] }, { "cell_type": "markdown", "metadata": { "id": "qTS3Lm1nWiMK" }, "source": [ "# Explore the Environment (25 points)" ] }, { "cell_type": "markdown", "metadata": { "id": "yyCz0lxFWsFv" }, "source": [ "To better understand the environment, let's first see what a random agent does." 
] }, { "cell_type": "code", "execution_count": 3, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 533, "referenced_widgets": [ "c48f9ad4e3214bbe8d7518441bd56339", "8fca041fb7a54f198e86951c5468434b", "e31240662e47413f93d3db20b230b701", "884ea4976a7543d38937c8c8a8ca6414", "e4ef931db4dd47dd855291b4d4cfe584", "4d1a6ace29c043b191691c182e1e1fd1", "cfc793ef3ee648bbb3b77d9ccfd0e8ea", "300f0a5bade84684942da27934e1e0b5", "d51aaee7e0874f8c98b468dfce881fc6", "a9c7183dcac64c7dba06525a831bf135", "318ed47730304069a2978696d0e58fd7" ] }, "id": "xZI6WO4bmgjS", "outputId": "65a6a700-b1f4-4940-b9c8-b32a9a7f2558" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "c48f9ad4e3214bbe8d7518441bd56339", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/500 [00:00\n", " \n", " Your browser does not support the video tag.\n", " " ], "text/plain": [ "" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Initialize the pendulum environment with video recording enabled\n", "env = gym.make('Pendulum-v1', render_mode='rgb_array')\n", "\n", "# Create a directory to save the video\n", "video_directory = \"random_videos\"\n", "env = RecordVideo(env, video_directory)\n", "\n", "# Set the number of steps to record\n", "num_steps = 500\n", "\n", "# TODO: Reset the environment to get the initial state\n", "\n", "\n", "for _ in (pbar := trange(num_steps)):\n", " # TODO: Sample a random action\n", "\n", "\n", " # TODO: Step the environment\n", "\n", "\n", " # TODO: Render the environment\n", "\n", "\n", " # TODO: If done reset and get new state\n", "\n", "\n", " pbar.set_description(f'Action = {action[0]:.2f} | Reward = {reward:.2f}')\n", "\n", "# Close the environment to finalize the video\n", "env.close()\n", "\n", "# Show the video\n", "embed_mp4(f'{video_directory}/rl-video-episode-0.mp4')" ] }, { "cell_type": "markdown", "metadata": { "id": "-8SUlhUHWvbC" }, "source": [ "The goal of the [Pendulum](https://gymnasium.farama.org/environments/classic_control/pendulum/) environment in [Gymnasium](https://gymnasium.farama.org/) is to swing a pendulum to an upright position and keep it balanced there.\n", "In this environment, you control a torque that can be applied to the pendulum.\n", "The objective is to apply the right amount of torque to swing the pendulum up and maintain its upright position." ] }, { "cell_type": "markdown", "metadata": { "id": "wIGVVtRU2kY4" }, "source": [ "## Simulation Tools" ] }, { "cell_type": "markdown", "metadata": { "id": "s7oVXleZ2rY8" }, "source": [ "Both the `angle_normalize` function and the `PendulumDynamics` class are fundamental components for accurately simulating, analyzing, and controlling the pendulum system.\n", "They ensure consistency in angle representation and provide a realistic model of the pendulum's behavior, enabling effective control strategies.\n", "\n", "We use the `angle_normalize` function for:\n", "\n", "* **Consistency**: When dealing with angles, it's important to keep them within a standard range to ensure consistency in calculations.\n", "* **Handling Wrapping**: Angles can wrap around when they exceed $2\\pi$ or drop below $-2\\pi$. 
Normalizing angles to a single range, $[-\pi, \pi)$, avoids the confusion and errors that can arise from this ambiguity.\n", "\n", "\n", "And we use the `PendulumDynamics` class for:\n", "\n", "* **Modeling Physical Behavior**: The `PendulumDynamics` class models the physics of the pendulum: gravity, inertia, and the applied torque.\n", "* **Simulation and Control**: This class allows us to simulate the pendulum's response to different actions, which is crucial for designing and testing control algorithms.\n", "* **Optimization**: Understanding the dynamics of the pendulum helps in optimizing the control inputs. The class encapsulates the physics involved, enabling us to apply control techniques like Model Predictive Control (MPC) to achieve the desired behavior.\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "id": "N9y8iJ0u2Lgp" }, "outputs": [], "source": [ "class PendulumDynamics(nn.Module):\n", "    \"\"\"One step of the Pendulum-v1 dynamics, differentiable w.r.t. state and action.\"\"\"\n", "\n", "    def forward(self, state, action):\n", "        th = state[:, 0].view(-1, 1)  # pendulum angle (0 = upright)\n", "        thdot = state[:, 1].view(-1, 1)  # angular velocity\n", "\n", "        g = 10  # default gravity of the environment (not 9.81)\n", "        m = 1  # pendulum mass\n", "        l = 1  # pendulum length\n", "        dt = 0.05  # integration time step\n", "\n", "        u = action\n", "        u = torch.clamp(u, -2, 2)  # torque is clipped to the environment's action bounds\n", "\n", "        # Semi-implicit Euler update, matching the environment's equations of motion\n", "        newthdot = thdot + (-3 * g / (2 * l) * torch.sin(th + np.pi) + 3. / (m * l ** 2) * u) * dt\n", "        newth = th + newthdot * dt\n", "        newthdot = torch.clamp(newthdot, -8, 8)  # angular velocity is clipped to [-8, 8]\n", "\n", "        state = torch.cat((angle_normalize(newth), newthdot), dim=1)\n", "        return state\n", "\n", "\n", "def angle_normalize(x):\n", "    \"\"\"Map an angle in radians to the range [-pi, pi).\"\"\"\n", "    return (((x + math.pi) % (2 * math.pi)) - math.pi)" ] }, { "cell_type": "markdown", "metadata": { "id": "eSb9nKlw4Gra" }, "source": [ "# Model Predictive Control (50 points)" ] }, { "cell_type": "markdown", "metadata": { "id": "yc8yKVti4tMp" }, "source": [ "[mpc.pytorch](https://locuslab.github.io/mpc.pytorch/) is a library that provides a fast and differentiable [Model Predictive Control](https://en.wikipedia.org/wiki/Model_predictive_control) (MPC) solver for PyTorch. It was developed by researchers at [LocusLab](https://locuslab.github.io/) and is designed to integrate seamlessly with PyTorch, allowing for efficient and flexible control of dynamic systems.\n", "\n", "If you are interested in learning more, check out [OptNet](https://arxiv.org/abs/1703.00443) and [Differentiable MPC](https://arxiv.org/abs/1810.13400)." ] }, { "cell_type": "markdown", "metadata": { "id": "PfDVMqgi5kax" }, "source": [ "## Quick Setup" ] }, { "cell_type": "markdown", "metadata": { "id": "qU9dpT0C5nFg" }, "source": [ "You can install this library with `pip`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "HEyJ0SPi4W28" }, "outputs": [], "source": [ "! pip install mpc" ] }, { "cell_type": "markdown", "metadata": { "id": "J9Ev1gLC5uGd" }, "source": [ "While `mpc` offers a lot, in this notebook we are going to focus only on the core features.\n", "To learn more, check out the [GitHub repository](https://github.com/locuslab/mpc.pytorch) of this project."
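] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before moving on to the solver, it can help to sanity-check `PendulumDynamics` against the real environment. The cell below is an optional sketch, not part of the assignment scaffold: it assumes `env.unwrapped.state` holds `[theta, theta_dot]` (which is the case for Pendulum-v1), and the names `check_env` and `torque` are illustrative." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sanity check: one step of the analytic model vs. one step of the real environment.\n", "check_env = gym.make('Pendulum-v1')\n", "check_env.reset(seed=0)\n", "check_env.unwrapped.state = np.array([np.pi / 4, 0.5])  # [theta, theta_dot]\n", "torque = np.array([1.0])\n", "\n", "state = torch.tensor(check_env.unwrapped.state, dtype=torch.float32).view(1, -1)\n", "model_next = PendulumDynamics()(state, torch.tensor(torque, dtype=torch.float32).view(1, -1))\n", "\n", "check_env.step(torque)\n", "print('model:', model_next.numpy().ravel())\n", "print('env:  ', check_env.unwrapped.state)  # should agree closely, up to angle wrapping\n", "check_env.close()"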
] }, { "cell_type": "code", "execution_count": 6, "metadata": { "id": "euuwSTJ14WO-" }, "outputs": [], "source": [ "from mpc import mpc" ] }, { "cell_type": "markdown", "metadata": { "id": "tf0ALVu7Ggsh" }, "source": [ "## The Cost Function" ] }, { "cell_type": "markdown", "metadata": { "id": "l9pgPno8GjFV" }, "source": [ "The `define_swingup_goal` function creates a cost function that the MPC algorithm uses to determine the optimal control actions to achieve the desired pendulum swing-up task.\n", "It considers both the desired state (upright and stationary) and penalizes large control inputs to ensure smooth control actions." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "id": "13hsSR8cGLzA" }, "outputs": [], "source": [ "def define_swingup_goal():\n", " goal_weights = torch.tensor((1., 0.1)) # Weights for theta and theta_dot\n", " goal_state = torch.tensor((0., 0.)) # Desired state (theta=0, theta_dot=0)\n", " ctrl_penalty = 0.001\n", " q = torch.cat((goal_weights, ctrl_penalty * torch.ones(1))) # Combined weights\n", " px = -torch.sqrt(goal_weights) * goal_state\n", " p = torch.cat((px, torch.zeros(1)))\n", " Q = torch.diag(q).repeat(TIMESTEPS, N_BATCH, 1, 1) # Cost matrix\n", " p = p.repeat(TIMESTEPS, N_BATCH, 1)\n", " return mpc.QuadCost(Q, p) # Quadratic cost" ] }, { "cell_type": "markdown", "metadata": { "id": "4ILZGtcdLeFb" }, "source": [ "## Running MPC" ] }, { "cell_type": "markdown", "metadata": { "id": "JskzIJuLMpoe" }, "source": [ "To run the MPC, in each iteration:\n", "\n", "1. First you obtain the current state of the environment and convert it to a tensor.\n", "2. Then you recreate the MPC controller using the updated `u_init` and calculate the optimal control actions based on the current state, dynamics, and cost function.\n", "3. Next you take the first planned action and update `u_init` with the remaining actions.\n", "4. Finally you take a step in the environment and store the rewards and actions.\n", "\n", "Remember that `u_init` serves as the initial guess for the control inputs." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "tE45H5NI05Q_" }, "outputs": [], "source": [ "# Select the hyperparameters\n", "RUN_ITER = ...\n", "TIMESTEPS = ...\n", "N_BATCH = ...\n", "LQR_ITER = ...\n", "\n", "\n", "# Setup the environmnet\n", "env = gym.make('Pendulum-v1', render_mode='rgb_array')\n", "video_directory = \"mpc_videoss\"\n", "env = RecordVideo(env, video_directory)\n", "env.reset() # Reset the underlying environment\n", "env.unwrapped.state = [np.pi, 1] # Environment must start in downward position\n", "\n", "\n", "# Define the cost function and initialize u\n", "cost = define_swingup_goal()\n", "u_init = None\n", "\n", "\n", "# Run MPC\n", "rewards, actions = [], []\n", "for _ in (pbar := trange(RUN_ITER)):\n", " state = env.unwrapped.state.copy()\n", " state = torch.tensor(state).view(1, -1)\n", " # recreate controller using updated u_init (kind of wasteful right?)\n", " ctrl = mpc.MPC(2, 1, TIMESTEPS, u_lower=-2.0, u_upper=+2.0,\n", " lqr_iter=LQR_ITER, exit_unconverged=False, eps=1e-2,\n", " n_batch=N_BATCH, backprop=False, verbose=0, u_init=u_init,\n", " grad_method=mpc.GradMethods.AUTO_DIFF)\n", "\n", " # compute action based on current state, dynamics, and cost\n", " nominal_states, nominal_actions, nominal_objs = ctrl(state, cost, PendulumDynamics())\n", "\n", " # TODO: Take first planned action\n", " action = ...\n", "\n", " َ# Update u_init\n", " u_init = torch.cat((nominal_actions[1:], torch.zeros(1, N_BATCH, 1)), dim=0)\n", "\n", " # TODO: Take a step in the environment\n", "\n", "\n", " # TODO: Store the latest action and reward\n", "\n", "\n", " pbar.set_description(f\"Action = {actions[-1]:.2f} | Reward = {rewards[-1]:.2f}\")\n", " env.render()\n", "\n", "env.close()\n", "\n", "# Plot the results\n", "plot_results(rewards, actions)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 501 }, "id": "4FX9ANmENtUO", "outputId": "4a8b39a3-7499-481f-c9b8-fd5880df3563" }, "outputs": [ { "data": { "text/html": [ "\n", " " ], "text/plain": [ "" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Show the policy learned\n", "embed_mp4(f'{video_directory}/rl-video-episode-0.mp4')" ] }, { "cell_type": "markdown", "metadata": { "id": "jskFvBFHLido" }, "source": [ "# Questions (25 points)" ] }, { "cell_type": "markdown", "metadata": { "id": "9Pk2jwFVNzTZ" }, "source": [ "Based on your experiments, answer the following questions:\n", "\n", "\n", "\n", "* How does the number of LQR iterations affect the MPC?\n", "* What if we didn't have access to the model dynamics? Could we still use MPC?\n", "* Do `TIMESTEPS` or `N_BATCH` matter here? Explain.\n", "* Why do you think we chose to set the initial state of the environment to the downward position?\n", "* As time progresses (later iterations) what happens to the actions and rewards? 
Why?\n", "\n", "`Your Answers:`\n" ] } ], "metadata": { "colab": { "collapsed_sections": [ "UxK08xN_wco-", "qTS3Lm1nWiMK", "ErwP00DkW_L6", "GJHiYFIiX6OJ" ], "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "name": "python" }, "widgets": { "application/vnd.jupyter.widget-state+json": { "300f0a5bade84684942da27934e1e0b5": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "318ed47730304069a2978696d0e58fd7": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "4d1a6ace29c043b191691c182e1e1fd1": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "884ea4976a7543d38937c8c8a8ca6414": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": 
"IPY_MODEL_a9c7183dcac64c7dba06525a831bf135", "placeholder": "​", "style": "IPY_MODEL_318ed47730304069a2978696d0e58fd7", "value": " 500/500 [00:13<00:00, 93.26it/s]" } }, "8fca041fb7a54f198e86951c5468434b": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HTMLModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HTMLModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HTMLView", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_4d1a6ace29c043b191691c182e1e1fd1", "placeholder": "​", "style": "IPY_MODEL_cfc793ef3ee648bbb3b77d9ccfd0e8ea", "value": "Action = -0.83 | Reward = -7.56: 100%" } }, "a9c7183dcac64c7dba06525a831bf135": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "c48f9ad4e3214bbe8d7518441bd56339": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "HBoxModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "HBoxModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "HBoxView", "box_style": "", "children": [ "IPY_MODEL_8fca041fb7a54f198e86951c5468434b", "IPY_MODEL_e31240662e47413f93d3db20b230b701", "IPY_MODEL_884ea4976a7543d38937c8c8a8ca6414" ], "layout": "IPY_MODEL_e4ef931db4dd47dd855291b4d4cfe584" } }, "cfc793ef3ee648bbb3b77d9ccfd0e8ea": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "DescriptionStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "DescriptionStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "description_width": "" } }, "d51aaee7e0874f8c98b468dfce881fc6": { "model_module": "@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "ProgressStyleModel", "state": { "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "ProgressStyleModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "StyleView", "bar_color": null, "description_width": "" } }, "e31240662e47413f93d3db20b230b701": { "model_module": 
"@jupyter-widgets/controls", "model_module_version": "1.5.0", "model_name": "FloatProgressModel", "state": { "_dom_classes": [], "_model_module": "@jupyter-widgets/controls", "_model_module_version": "1.5.0", "_model_name": "FloatProgressModel", "_view_count": null, "_view_module": "@jupyter-widgets/controls", "_view_module_version": "1.5.0", "_view_name": "ProgressView", "bar_style": "success", "description": "", "description_tooltip": null, "layout": "IPY_MODEL_300f0a5bade84684942da27934e1e0b5", "max": 500, "min": 0, "orientation": "horizontal", "style": "IPY_MODEL_d51aaee7e0874f8c98b468dfce881fc6", "value": 500 } }, "e4ef931db4dd47dd855291b4d4cfe584": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } } } } }, "nbformat": 4, "nbformat_minor": 0 }