{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# This notebook present the most basic use of Grid2Op" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Objectives**\n", "\n", "This notebook will cover some basic raw functionality at first. It will then show how these raw functionalities are encapsulated with easy to use functions.\n", "\n", "The recommended way to use these is to through the Runner, and not by getting through the instanciation of class one by one." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import os\n", "import sys\n", "import grid2op" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Impossible to automatically add a menu / table of content to this notebook.\n", "You can download \"jyquickhelper\" package with: \n", "\"pip install jyquickhelper\"\n" ] } ], "source": [ "res = None\n", "try:\n", " from jyquickhelper import add_notebook_menu\n", " res = add_notebook_menu()\n", "except ModuleNotFoundError:\n", " print(\"Impossible to automatically add a menu / table of content to this notebook.\\nYou can download \\\"jyquickhelper\\\" package with: \\n\\\"pip install jyquickhelper\\\"\")\n", "res" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 0) Summary of RL method" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Though the `Grid2Op` package can be used to perform many different tasks, these set of notebooks will be focused on the machine learning part, and its usage in a Reinforcement learning framework. \n", "\n", "The reinforcement learning is a framework that allows to train \"agent\" to solve time dependant domain. We tried to cast the grid operation planning into this framework. The package `Grid2Op` was inspired by it." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In a reinforcement learning (RL), there are 2 distinct entities:\n", "* **Environment**: is a modeling of the \"world\" in which the *agent* takes some *actions* to achieve some pre definite objectives.\n", "* **Agent**: will do actions on the environment that will have consequences.\n", "\n", "These 2 entities exchange 3 main type of information:\n", "* **Action**: it's an information sent by the Agent that will modify the internal state of the environment.\n", "* **State** / **Observation**: is the (partial) view of the environment by the Agent. The Agent receive a new state after each actions. He can use the observation (state) at time step *t* to take the action at time *t*.\n", "* **Reward**: is the score received by the agent for the previous action.\n", "\n", "A schematic representaiton of this is shown in the figure bellow (Credit: [Sutton & Barto](http://incompleteideas.net/book/bookdraft2017nov5.pdf)):" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![title](img/reinforcement-learning.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook, we will develop a simple Agent that takes some action (powerline disconnection) based on the observation of the environment." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For more information about the problem, please visit the [Example_5bus](Example_5bus.ipynb) notebook which dive more into the casting of the real time operation planning into a RL framework. 
"Note that this notebook is still under development at the moment.\n", "\n", "Good material is also provided in the white paper [Reinforcement Learning for Electricity Network Operation](https://arxiv.org/abs/2003.07339) presented for the L2RPN 2020 NeurIPS edition." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## I) Creating an Environment: Step-by-step explanation of the basic classes of this package" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### I.A) Get Data to feed the powergrid" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to be initialized, an Agent needs to know in which space it operates. For that, we load an Environment based on the IEEE case14.\n", "\n", "An example of this powergrid can be found in the package data. We import the corresponding paths here:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "powergrid_path = grid2op.CASE_14_FILE\n", "multi_episode_path = grid2op.CHRONICS_MLUTIEPISODE\n", "names_chronics_to_backend = grid2op.NAMES_CHRONICS_TO_BACKEND\n", "max_iter = 10" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***NB*** In order to work smoothly, the names used by the backend and the names used in the data files need to match exactly. As the process used to generate the data (we suppose that the balance between production and load has been enforced beforehand) is often separate from, and agnostic to, the powergrid, it is not surprising to find the same physical object under different names in the temporal series data and in the powergrid description file. In order to simplify the matching, in this example we use the mapper `names_chronics_to_backend`, which is able to \"convert\" the names given in the data into the names of the objects in the powergrid description file. It simply makes the link between the two.\n", "\n", "More details about how the matching is performed can be found in the help of the *ChronicsHandler.GridValue.initialize* method [here](https://grid2op.readthedocs.io/en/latest/chronics.html#grid2op.Chronics.GridValue.initialize) or in the file [ChronicsHandler.py](../grid2op/Chronics/ChronicsHandler.py)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to work, an *Environment* needs to be fed with data. These data can be read from files, for example. Some examples are also provided in this package. We import them:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "from grid2op.Chronics import ChronicsHandler, Multifolder, GridStateFromFileWithForecasts\n", "data_feeding = ChronicsHandler(chronicsClass=Multifolder,\n", " path=multi_episode_path,\n", " gridvalueClass=GridStateFromFileWithForecasts,\n", " max_iter=max_iter)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`data_feeding` is now an instance that will automatically load the data and modify the powergrid accordingly at each time step. The process of reading is handled by this class, but the process of modifying the underlying powergrid is carried out by the *Environment* and performed by the *Backend*." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### I.B) Get a Backend to carry out the computations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A *Backend* is a dedicated object that has the responsibility to compute the resulting powerflow from given injections (productions and loads).\n",
"The possibility to implement your own Backend makes the Grid2Op framework completely agnostic to the modeling of the powergrid you want to use and to the method used to solve the powerflow.\n", "\n", "An implementation of a Backend based on PandaPower is provided." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "from grid2op.Backend import PandaPowerBackend\n", "backend = PandaPowerBackend()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`backend` is now a variable that is able to compute powerflows and to emulate cascading failures. To work properly and carry out the right computations, it needs to be aware of some *Parameters*." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### I.C) Getting the parameters of the game" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this example, we will use the default parameters available. More information about the parameters that can be modified can be found in the help [here](https://grid2op.readthedocs.io/en/latest/parameters.html), or in the file [Parameters.py](grid2op/Parameters.py)." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "from grid2op.Parameters import Parameters\n", "param = Parameters()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Note on the parameters**: some parameters have a direct influence on the difficulty of the game. For example, the \"NO_OVERFLOW_DISCONNECTION\" member of this class, if set to ``True`` (default is ``False``), disables the automatic disconnection of powerlines when they are in overflow. Other parameters influence the speed at which timesteps are computed. This is for example the case with the \"ENV_DC\" member: if it is set to ``True``, a faster (but less accurate) computation engine is used to compute the flows. This might be useful, for example, at the beginning of training.\n", "\n", "To set the parameters, you can do, for example:\n", "\n", "```python\n", "from grid2op.Parameters import Parameters\n", "param = Parameters()\n", "param.from_dict({\"NO_OVERFLOW_DISCONNECTION\": True, \"ENV_DC\": True})\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### I.D) Building the Environment" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "from grid2op.Environment import Environment\n", "env = Environment(init_grid_path=powergrid_path,\n", " chronics_handler=data_feeding,\n", " backend=backend,\n", " parameters=param,\n", " names_chronics_to_backend=names_chronics_to_backend)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Creating an *Environment* will load the powergrid (*powergrid_path*), the data to feed it (*data_feeding*), the powerflow simulator (*backend*) and the game settings (*param*). It uses the first row of the data to initialize the powergrid and performs some other internal checks that the data are suited to the powergrid.\n", "\n", "It can take a moment (usually a few seconds) to load it." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This environment can be greatly customized; we expose here only basic functionalities.\n",
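"\n", "For instance, here is a minimal sketch (not executed in this notebook) of how a different reward could be plugged in when building the Environment by hand. We assume that this version of the Environment constructor accepts a `rewardClass` keyword argument; check the Environment documentation if it does not:\n", "\n", "```python\n", "from grid2op.Reward import L2RPNReward\n", "env_custom = Environment(\n", "    init_grid_path=powergrid_path,\n", "    chronics_handler=data_feeding,\n", "    backend=backend,\n", "    parameters=param,\n", "    rewardClass=L2RPNReward,  # assumed keyword argument, see the Environment documentation\n", "    names_chronics_to_backend=names_chronics_to_backend,\n", ")\n", "```\n", "\n",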
"For more information, it is advised to read the documentation [here](https://grid2op.readthedocs.io/en/latest/environment.html) (if it has been built locally), to consult the official documentation on the internet, or to consult the source code of [Environment.py](grid2op/Environment/Environment.py)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### I.E) A shortcut to perform all that" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All of the above can be done by calling one function that handles the creation of the Environment with default values.\n", "\n", "The particular environment described above is named `case14_fromfile`.\n", "\n", "To define/create it, we can call:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "env = grid2op.make(\"case14_fromfile\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## II) Creating an Agent" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An *Agent* is the name given to the \"operator\" / \"bot\" / \"algorithm\" that performs some modifications of the powergrid when it faces an \"observation\".\n", "\n", "Some examples of Agents are provided in the file [Agent.py](grid2op/Agent/Agent.py).\n", "\n", "A deeper look at the different Agents provided can be found in the [4_StudyYourAgent](4_StudyYourAgent.ipynb) notebook (in progress). We suppose here that we use the simplest Agent, the one that does nothing." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "from grid2op.Agent import DoNothingAgent\n", "my_agent = DoNothingAgent(env.helper_action_player)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## III) Assess how the Agent is performing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The performance of each Agent is assessed with the reward. For this example, the reward is a *FlatReward* that simply counts how many time steps the *Agent* has successfully managed before breaking any rules. For more control over this reward, it is recommended to look at the documentation of the Environment class.\n", "\n", "More examples of rewards are also available in the official documentation or [here](https://grid2op.readthedocs.io/en/latest/reward.html)."
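, "\n", "\n", "As in the OpenAI Gym interface, the environment also exposes a `reward_range` attribute, i.e. the (minimum, maximum) values the reward can take. The loop below uses its lower bound as the reward given to the agent at the very first step, before any call to env.step:\n", "\n", "```python\n", "print(env.reward_range)  # (minimum reward, maximum reward)\n", "```"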
] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "ename": "NameError", "evalue": "name 'reward' is not defined", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0mobs\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0menv\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mreset\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0;32mwhile\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0mdone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 6\u001b[0;31m \u001b[0mact\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mmy_agent\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mact\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mobs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mreward\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdone\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# chose an action to do, in this case \"do nothing\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 7\u001b[0m \u001b[0mobs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mreward\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdone\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minfo\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0menv\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstep\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mact\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# implement this action on the powergrid\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 8\u001b[0m \u001b[0mcum_reward\u001b[0m \u001b[0;34m+=\u001b[0m \u001b[0mreward\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mNameError\u001b[0m: name 'reward' is not defined" ] } ], "source": [ "done = False\n", "time_step = int(0)\n", "cum_reward = 0.\n", "obs = env.reset()\n", "reward = env.reward_range[0]\n", "while not done:\n", " act = my_agent.act(obs, reward, done) # chose an action to do, in this case \"do nothing\"\n", " obs, reward, done, info = env.step(act) # implement this action on the powergrid\n", " cum_reward += reward\n", " time_step += 1\n", " if time_step > max_iter:\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now evaluate how well this *agent* is performing:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"This agent managed to survive {} timesteps\".format(time_step))\n", "print(\"It's final cumulated reward is {}\".format(cum_reward))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**NB** To the function \"make\" is highly customizable. For example, you can change the reward you are using to train your agent this way:\n", "\n", "```python\n", "from grid2op.Reward import L2RPNReward\n", "env = grid2op.make(action_class=my_agent,\n", " reward_class=L2RPNReward)\n", "```\n", "\n", "Because we thought using a single reward to train an agent in such a complex environment, we also gave the possibility to assess different rewards during training. 
"This can be done with the following code:\n", "\n", "```python\n", "from grid2op.Reward import L2RPNReward, FlatReward\n", "env = grid2op.make(\n", "    \"case14_fromfile\",\n", "    reward_class=L2RPNReward,\n", "    other_rewards={\"other_reward\": FlatReward},\n", ")\n", "```\n", "The results of these rewards can be accessed in the \"info\" return value of the call to env.step. See the official documentation of rewards [here](https://grid2op.readthedocs.io/en/latest/reward.html) for more information." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## IV) More convenient ways to perform all these operations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All the above steps have been detailed as a \"quick start\", to give an example of the main classes of the Grid2Op package. Coding all of the above by hand offers a lot of flexibility, but it is quite tedious.\n", "\n", "What we present here is a much shorter way to perform all of the above. In this section we present 2 ways:\n", "* The quickest way, using the grid2op.main API, most suited when basic computations need to be carried out.\n", "* The recommended way, using a *Runner*: it gives more flexibility than the grid2op.main API but can be harder to configure.\n", "\n", "For this section, we assume the same as before:\n", "* The Agent is \"Do Nothing\"\n", "* The Environment is the default Environment\n", "* PandaPower is used as the backend\n", "* The chronics come from the files included in this package\n", "* etc." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### IV.A) Using the grid2op.main API" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When only simple assessments need to be performed, the grid2op.main API is perfectly suited. This API can also be accessed from the command line:\n", "```bash\n", "python3 -m grid2op.main\n", "```\n", "\n", "We detail here its usage as an API, to assess the performance of a given Agent.\n", "\n", "As opposed to building an environment from scratch (see the previous section), this requires much less effort: we don't need to initialize (instantiate) anything. Everything is carried out inside the Runner called by the *main* function.\n", "\n", "We ask here for 1 episode (i.e. we play one scenario until either the agent reaches a game over or the scenario ends). But this method would work just as well if we asked for more, as sketched below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from grid2op.main import main\n", "res = main(nb_episode=1,\n", " agent_class=DoNothingAgent,\n", " path_casefile=powergrid_path,\n", " path_chronics=multi_episode_path,\n", " names_chronics_to_backend=names_chronics_to_backend,\n", " gridStateclass_kwargs={\"gridvalueClass\": GridStateFromFileWithForecasts, \"max_iter\": max_iter}\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A call to the 2 lines of code above will:\n", "* Create a valid environment\n", "* Create a valid agent\n", "* Assess how well an agent performs on one episode."
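, "\n", "\n", "For example, here is a sketch (reusing the variables defined earlier in this notebook) of the exact same call, simply asking for 2 episodes instead of 1:\n", "\n", "```python\n", "res_2_episodes = main(\n", "    nb_episode=2,\n", "    agent_class=DoNothingAgent,\n", "    path_casefile=powergrid_path,\n", "    path_chronics=multi_episode_path,\n", "    names_chronics_to_backend=names_chronics_to_backend,\n", "    gridStateclass_kwargs={\"gridvalueClass\": GridStateFromFileWithForecasts, \"max_iter\": max_iter},\n", ")\n", "```"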
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"The results are:\")\n", "for chron_name, _, cum_reward, nb_time_step, max_ts in res:\n", " msg_tmp = \"\\tFor chronics located at {}\\n\".format(chron_name)\n", " msg_tmp += \"\\t\\t - cumulative reward: {:.2f}\\n\".format(cum_reward)\n", " msg_tmp += \"\\t\\t - number of time steps completed: {:.0f} / {:.0f}\".format(nb_time_step, max_ts)\n", " print(msg_tmp)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is particularly suited to evaluate different agents, for example we can quickly evaluate a second agent. For the below example, we can import an agent class *PowerLineSwitch* whose job is to connect and disconnect the power lines in the power network. This *PowerLineSwitch* Agent will simulate the effect of disconnecting each powerline on the powergrid and take the best action found (its execution can take a long time, depending on the scenario and the amount of powerlines on the grid). **The execution of the code below can take a few moments**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from grid2op.Agent import PowerLineSwitch\n", "res = main(nb_episode=1,\n", " agent_class=PowerLineSwitch,\n", " path_casefile=powergrid_path,\n", " path_chronics=multi_episode_path,\n", " names_chronics_to_backend=names_chronics_to_backend,\n", " gridStateclass_kwargs={\"gridvalueClass\": GridStateFromFileWithForecasts, \"max_iter\": max_iter}\n", " )\n", "print(\"The results are:\")\n", "for chron_name, _, cum_reward, nb_time_step, max_ts in res:\n", " msg_tmp = \"\\tFor chronics located at {}\\n\".format(chron_name)\n", " msg_tmp += \"\\t\\t - cumulative reward: {:.2f}\\n\".format(cum_reward)\n", " msg_tmp += \"\\t\\t - number of time steps completed: {:.0f} / {:.0f}\".format(nb_time_step, max_ts)\n", " print(msg_tmp)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using this API it's also possible to store the results for a detailed examination of the aciton taken by the Agent. Note that writing on the hard drive has an overhead on the computation time.\n", "\n", "To do this, only a simple argument need to be added to the *main* function call. An example can be found below (where the outcome of the experiment will be stored in the `saved_experiment_donothing` directory):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "res = main(nb_episode=1,\n", " agent_class=DoNothingAgent,\n", " path_casefile=powergrid_path,\n", " path_chronics=multi_episode_path,\n", " names_chronics_to_backend=names_chronics_to_backend,\n", " gridStateclass_kwargs={\"gridvalueClass\": GridStateFromFileWithForecasts, \"max_iter\": max_iter},\n", " path_save=os.path.abspath(\"saved_experiment_donothing\")\n", " )\n", "print(\"The results are:\")\n", "for chron_name, _, cum_reward, nb_time_step, max_ts in res:\n", " msg_tmp = \"\\tFor chronics located at {}\\n\".format(chron_name)\n", " msg_tmp += \"\\t\\t - cumulative reward: {:.2f}\\n\".format(cum_reward)\n", " msg_tmp += \"\\t\\t - number of time steps completed: {:.0f} / {:.0f}\".format(nb_time_step, max_ts)\n", " print(msg_tmp)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!ls saved_experiment_donothing/1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All the informations saved are showed above. 
"For more information about this content, please don't hesitate to read the documentation of the [Runner](https://grid2op.readthedocs.io/en/latest/runner.html) (if compiled locally) or consult the [Runner.py](grid2op/Runner.py) file.\n", "\n", "**NB** A lot more information about *Actions* is provided in the [2_Action_GridManipulation](2_Action_GridManipulation.ipynb) notebook. The last section of [3_TrainingAnAgent](3_TrainingAnAgent.ipynb) gives a quick example of how to read / write actions from a saved repository." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### IV.B) Using the \"make\" function" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By default, the grid2op framework comes with a few predefined environments, each with its own properties. Some are suitable for discrete control, others more for continuous control, etc.\n", "\n", "Two environments can be used to get familiar with the platform. The first one is \"case5_example\", which represents a tiny powergrid (with only 5 substations); the other one is \"case14_redisp\", which is a transposition of the IEEE case14 powergrid.\n", "\n", "They can both be created easily:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "env_case5 = grid2op.make(\"case5_example\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "env_case14 = grid2op.make(\"case14_redisp\")\n", "# More data for this environment are available as a github release. You can download them easily with:\n", "# python -m grid2op.download --name \"case14_redisp\" --path_save PATH\\\\WHERE\\\\YOU\\\\WANT\\\\TO\\\\DOWNLOAD\\\\DATA\n", "# from a command line" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from grid2op.Runner import Runner\n", "from grid2op.Agent import DoNothingAgent\n", "runner = Runner(**env_case14.get_params_for_runner(), agentClass=DoNothingAgent)\n", "res = runner.run(nb_episode=2, nb_process=1, max_iter=10)\n", "for path_chron, episode_id, total_reward, nb_iter, max_ts in res:\n", "    print(\"Total reward for episode {} is {}\".format(episode_id, total_reward))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The use of `make` + `Runner` makes it easy to assess the performance of a trained agent. Besides, the Runner has been tightly integrated with other tools and makes the replay / post-analysis of episodes easy. It is the recommended method to use in grid2op." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.10" } }, "nbformat": 4, "nbformat_minor": 2 }