{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Reminder\n", "\n", "Try me out interactively with: [![Binder](./img/badge_logo.svg)](https://mybinder.org/v2/gh/rte-france/Grid2Op/master)\n", "\n", "## Goals\n", "- Keep the grid safe\n", "- Avoid blackouts\n", "- Cost efficiency \n", "\n", "# Live Demo 1: How to perform actions\n", "\n", "## 3 types of actions:\n", " \n", "- Changing the status of a powerline\n", "- Changing the topology of a substation\n", "- Changing the setpoint of generator\n", "\n", "## 0) Import the environment" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import grid2op\n", "import numpy as np # recommended\n", "env = grid2op.make(\"l2rpn_neurips_2020_track1\", test=True) # i do a test, i set \"Test=True\" otherwise i don't specify anything\n", "env.seed(42)\n", "obs = env.reset()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And with just that, we are ready to go!\n", "\n", "## 1) Overall information about the environment" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "from grid2op.PlotGrid import PlotMatplot\n", "plot_helper = PlotMatplot(env.observation_space)\n", "\n", "_ = plot_helper.plot_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also plot the actual state of the grid visible by the agent, also called \"observation\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "_ = plot_helper.plot_obs(obs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2) Change the status of the powerline\n", "The first type of action is the switching on / off of powerline.\n", "\n", "This is rather easy to do in grid2op :-)\n", "\n", "**cost of the action**: 0\n", "\n", "**cooldown**: 3 time steps (for the affected lines)\n", "\n", "**maximum**: maximum 1 powerline affected per action" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "l_id = 3\n", "disconnect_line_3 = env.action_space({\"set_line_status\": [(l_id, -1)]})\n", "print(disconnect_line_3)\n", "next_obs, reward, done, extra_information = env.step(disconnect_line_3)\n", "_ = plot_helper.plot_obs(next_obs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"I need to wait {} timesteps before using this line again\".format(next_obs.time_before_cooldown_line[l_id]))\n", "print(\"Be carefull, powerline 0 is in overflow for {} timestep\".format(next_obs.timestep_overflow[0]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now let's try to connect the powerline." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "reconnect_line_3 = env.action_space({\"set_line_status\": [(l_id, +1)]})\n", "print(reconnect_line_3)\n", "next_obs1, reward1, done1, extra_information1 = env.step(reconnect_line_3)\n", "_ = plot_helper.plot_obs(next_obs1)\n", "print(\"Was this action illegal? 
{}\".format(extra_information1[\"is_illegal\"]))\n", "print(\"I need to wait {} timesteps before using this line again\".format(next_obs1.time_before_cooldown_line[l_id]))\n", "print(\"Be carefull, powerline 0 is in overflow for {} timestep\".format(next_obs1.timestep_overflow[0]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "do_nothing = env.action_space()\n", "next_obs2, reward2, done2, extra_information2 = env.step(do_nothing)\n", "_ = plot_helper.plot_obs(next_obs2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "next_obs3, reward3, done3, extra_information3 = env.step(do_nothing)\n", "print(\"Is the game over? {}\".format(done3))\n", "# plot the \"last\" observation\n", "next_obs3.line_status[extra_information3[\"disc_lines\"]] = False\n", "_ = plot_helper.plot_obs(next_obs3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can affect the status of a powerline with:\n", "\n", "| action | original status | final status |\n", "|-------------|-----------------|--------------|\n", "| {\"set_line_status\": [(l_id, -1)]} | Connected | **Dis**connected |\n", "| {\"set_line_status\": [(l_id, +1)]} | Connected | Connected |\n", "| {\"set_line_status\": [(l_id, -1)]} | **Dis**connected | **Dis**connected |\n", "| {\"set_line_status\": [(l_id, +1)]} | **Dis**connected | Connected |\n", "| {\"change_line_status\": [l_id]} | Connected | **Dis**connected |\n", "| {\"change_line_status\": [l_id]} | **Dis**connected | Connected |" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3) Change the topology of a given substation\n", "\n", "How is it possible?\n", "\n", "![](./img/powergrid_zoom2.png)\n", "\n", "![](./img/powergrid_zoom3.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All topologies if a substation counts 4 elements:\n", " \n", "![](./img/all_topo.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Changing the topology\n", "\n", "**cost of the action**: 0\n", "\n", "**cooldown**: 3 time steps (for the affected substation)\n", "\n", "**maximum**: 1 substation affected per action (regardless of the element changed)\n", "\n", "### How hard is this \"topology\" problem ?\n", "\n", "How many possible actions?\n", "\n", "How manypossible topologies?\n", "\n", "### And in grid2op?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Topology can also, in some cases save the situation **for free** (just changing some switches) here is an example.\n", "\n", "Let's imagine for some reason the powerline from **26** to **27** is disconnected, for example a light storm hit it. Suppose, for the sake of the example that this powerline will be out of order for a few days, meaning you cannot reconnect it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "env.set_id(1)\n", "env.seed(1)\n", "obs = env.reset() # remember we broke everything in our previous demonstration, so we restart properly\n", "_ = plot_helper.plot_obs(obs)\n", "contingency_id = 55\n", "next_obs, reward, done, extra_information = env.step(env.action_space({\"set_line_status\": \n", " [(contingency_id, -1)]}))\n", "# imagine we are in this state\n", "_ = plot_helper.plot_obs(next_obs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This does not look great at all, let's see what happen if we don't do anything." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ts_lived = 0\n", "while not done:\n", " next_obs, reward, done, extra_information = env.step(do_nothing)\n", " ts_lived += 1\n", "print(\"If i don't do anything i could survive {} time steps :-(\".format(ts_lived))\n", "next_obs.line_status[extra_information[\"disc_lines\"]] = False\n", "_ = plot_helper.plot_obs(next_obs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's replay to demonstrate the proper topological action can solve the issue" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "env.set_id(1)\n", "env.seed(1)\n", "obs = env.reset() # remember we broke everything in our previous demonstration\n", "_ = plot_helper.plot_obs(obs)\n", "next_obs, reward, done, extra_information = env.step(env.action_space({\"set_line_status\": \n", " [(contingency_id, -1)]}))\n", "# imagine we are in this state\n", "_ = plot_helper.plot_obs(next_obs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's implement the \"magic\" action now and see what happens." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "action_saved = env.action_space({\"set_bus\": {'loads_id': [(17, 1)], # i want to set to busbar 1 the load of id 27\n", " 'generators_id': [(5, 2), (6, 2), (7, 2), (8, 1)], # busbar 1 for the generator of id 14\n", " 'lines_or_id': [(22, 1), (23, 1), (27, 2), (28, 1), (48, 1), (49, 1), (54, 2)],\n", " 'lines_ex_id': [(17, 2), (18, 2), (19, 2), (20, 1), (21, 2)]}})\n", "next_obs, reward, done, extra_information = env.step(action_saved)\n", "# imagine we are in this state\n", "_ = plot_helper.plot_obs(next_obs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ts_lived_now = 0\n", "while not done:\n", " next_obs, reward, done, extra_information = env.step(do_nothing)\n", " ts_lived_now += 1\n", " if ts_lived_now >= 60:\n", " break\n", "print(\"If i do the action above i can survive {} time steps (we stopped the 'game' after 60 time steps)\"\n", " \"\".format(ts_lived_now))\n", "next_obs.line_status[extra_information[\"disc_lines\"]] = False\n", "_ = plot_helper.plot_obs(next_obs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Instead of dying after 9 time steps if we do nothing, for the same cost (zero) we can survive at least an extra 51 time steps (comptuation is stopped here because afterwards exogeneous factors appear and it would require other actions, which is beyond the scope of this demonstration).\n", " \n", "Be carefull at not doing something a bit \"risky\" though... For example, taking a \"random\" action at a substation can lead to dramatic effects all over the grid." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "env.set_id(1) # I am \"cheating\" a bit, for the demonstration, i specify which chronics i want to study\n", "obs = env.reset() # remember we broke everything in our previous demonstration\n", "_ = plot_helper.plot_obs(obs)\n", "s_id = 26\n", "print(\"The elements connected to the substation {} are: {}\".format(s_id,\n", " env.get_obj_connect_to(substation_id=s_id)))\n", "action = env.action_space({\"set_bus\": {'loads_id': [(27, 1)], # i want to set to busbar 1 the load of id 27\n", " 'generators_id': [(14, 1)], # busbar 1 for the generator of id 14\n", " 'lines_or_id': [(40, 1), (41, 2)],\n", " 'lines_ex_id': [(36, 1), (37, 2), (38, 1), (39, 1), (56, 2)]}})\n", "print(action)\n", "next_obs, reward, done, extra_information = env.step(action)\n", "_ = plot_helper.plot_obs(next_obs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4) Redispatching\n", "\n", "Remember in our case we suppose that the \"market\" / \"economic dispatch\" has been made, each producer know exactly what it will produce exactly for the entire scenario at a 5 mins resolution, and this is \"economic dispatch\" is supposed to be cost efficient.\n", "\n", "Performing a redispatching is telling some producer that they have to increase the production at a given place, and decrease it at another place. This has the consequence to make the grid more costly. Suppose the new situation (after redispatching) was less costly for the producers, when the \"economic dispatch\" was run, they would bewith this fictive state. This would mean the \"market\" / \"economic dispatch\" was not optimal in the first place, which (in our setting) is not possible.\n", "\n", "This explains why redispatching action have an intrisic cost. For our competition we decided to penalize redispatching proportionally to the amount of redispatching performed (see the competition description for a more detailed formula). \n", "\n", "This entails that, if you redispatch say 10MW you \"pay\" a cost of 10MW * the marginal cost of the grid (for simplification we say it is the highest cost of the turned on generator at any time).\n", "\n", "\n", "**cost of the action**: proportional to amount of energy dispatched\n", "\n", "**cooldown**: none (you can cancel it the next step)\n", "\n", "**maximum**: none (you can act on as many generator as you want)\n", "\n", "**/!\\\\** **/!\\\\** **/!\\\\** A redispatching will be \"modified\" by the environment before being implemented. This is because it must meet really difficult constraint: it should sum to 0, make sure every generator is between pmin and pmax, make sure, for each generator difference between the production at the next time step and the current time step is in a feasible range (ramping). Finally not all generator are dispatchable (solar and wind energy source are not). This can make the implementation of redispatching a bit tricky. **/!\\\\** **/!\\\\** **/!\\\\**\n", "\n", "Let's now see how it is performed in grid2op." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "env.set_id(1)\n", "env.seed(1)\n", "obs = env.reset() # remember we broke everything in our previous demonstration, so we restart properly\n", "_ = plot_helper.plot_obs(obs)\n", "next_obs, reward, done, extra_information = env.step(env.action_space({\"set_line_status\": \n", " [(contingency_id, -1)]}))\n", "# imagine we are in this state\n", "_ = plot_helper.plot_obs(next_obs, load_info=None)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "action_redisp = env.action_space({\"redispatch\": [\n", " (0, +1.4), (3, +8.8),\n", " (10, +2.8), (13, +2.8),\n", " (21, -9.9), (19, -2.8)\n", "]})\n", "next_obs, reward, done, extra_information = env.step(action_redisp)\n", "_ = plot_helper.plot_obs(next_obs, load_info=None)\n", "\n", "price_t = np.max(next_obs.gen_cost_per_MW[next_obs.prod_p > 0.]).astype(float)\n", "loss_cost = (np.sum(next_obs.prod_p) - np.sum(next_obs.load_p)) * price_t\n", "resdisp_cost = np.sum(np.abs(next_obs.actual_dispatch)) * price_t\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A few notes here:\n", "\n", "- the maximum possible for each generator were asked in the action\n", "- some generator were dispatchable but were not turned off, though they could have helped\n", "- dispite our effort, no powerlines is \"saved\"\n", "\n", "\n", "Let's see if we pursue in this direction" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "next_obs, reward, done, extra_information = env.step(action_redisp)\n", "_ = plot_helper.plot_obs(next_obs, load_info=None)\n", "\n", "price_t = np.max(next_obs.gen_cost_per_MW[next_obs.prod_p > 0.]).astype(float)\n", "loss_cost += (np.sum(next_obs.prod_p) - np.sum(next_obs.load_p)) * price_t\n", "resdisp_cost += np.sum(np.abs(next_obs.actual_dispatch)) * price_t" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "next_obs, reward, done, extra_information = env.step(action_redisp)\n", "_ = plot_helper.plot_obs(next_obs, load_info=None)\n", "\n", "price_t = np.max(next_obs.gen_cost_per_MW[next_obs.prod_p > 0.]).astype(float)\n", "loss_cost += (np.sum(next_obs.prod_p) - np.sum(next_obs.load_p)) * price_t\n", "resdisp_cost += np.sum(np.abs(next_obs.actual_dispatch)) * price_t" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"The cost of performing the redispatching action for the 3 time steps is ${:.0f}.\".format(resdisp_cost))\n", "print(\"In the mean time the cost due to Joule's effect is ${:.0f}, for the whole grid!\".format(loss_cost))\n", "\n", "ts_lived_now = 0\n", "while not done:\n", " next_obs, reward, done, extra_information = env.step(do_nothing)\n", " ts_lived_now += 1\n", " if ts_lived_now >= 60:\n", " break\n", "print(\"If i do the action above i can survive {} time steps (we stopped the 'game' after 60 time steps)\"\n", " \"\".format(ts_lived_now))\n", "next_obs.line_status[extra_information[\"disc_lines\"]] = False\n", "_ = plot_helper.plot_obs(next_obs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So what is the take away here:\n", "- redispatching action are tricky, because generators cannot be modified at will (physical constraint)\n", "- solving entirely the same problem as above was really difficult here, we could not resolve the issue on powerline 22 -> 26\n", "- the cost of this action is rather high, 
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Live Demo 2: how to make an agent\n", "\n", "The best way to start is to have a look at the l2rpn_baselines package. It makes the creation, loading and evaluation of agents rather easy. Of course, it is mostly meant to expose what can be done with grid2op; the \"baselines\" shown there do not perform especially well.\n", "\n", "## 1) Train an agent" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# example of training an agent\n", "from l2rpn_baselines.DuelQSimple import train\n", "from l2rpn_baselines.utils import NNParam, TrainingParam\n", "\n", "train_iter = 1000\n", "\n", "agent_name = \"test_agent\"\n", "save_path = \"saved_agent_DDDQN_BDA_{}\".format(train_iter)\n", "logs_dir = \"tf_logs_DDDQN\"\n", "\n", "# we then define the neural network we want to build (you may change this at will)\n", "## 1. first we choose which \"parts\" of the observation we want as input,\n", "## here for example only the generator and load information\n", "## see https://grid2op.readthedocs.io/en/latest/observation.html#main-observation-attributes\n", "## for details about all the observation attributes you can use\n", "li_attr_obs_X = [\"prod_p\", \"prod_v\", \"load_p\", \"load_q\"]\n", "# this automatically computes the size of the resulting vector\n", "observation_size = NNParam.get_obs_size(env, li_attr_obs_X)\n", "\n", "## 2. then we define its architecture\n", "sizes = [300, 300, 300] # 3 hidden layers, of 300 units each, why not...\n", "activs = [\"relu\" for _ in sizes] # all followed by relu activation, because... why not\n", "## 3. you put it all in a dictionary like this (specific to this baseline)\n", "kwargs_archi = {'observation_size': observation_size,\n", " 'sizes': sizes,\n", " 'activs': activs,\n", " \"list_attr_obs\": li_attr_obs_X}\n", "\n", "# you can also change the training parameters you are using\n", "# more information at https://l2rpn-baselines.readthedocs.io/en/latest/utils.html#l2rpn_baselines.utils.TrainingParam\n", "tp = TrainingParam()\n", "tp.batch_size = 32 # for example...\n", "tp.update_tensorboard_freq = int(train_iter / 10)\n", "tp.save_model_each = int(train_iter / 3)\n", "tp.min_observation = int(train_iter / 5)\n", "\n", "# which actions I keep (on this small environment I CANNOT train an agent to perform all ~66k actions)\n", "kwargs_converters = {\"all_actions\": None,\n", " \"set_line_status\": False,\n", " \"change_line_status\": True,\n", " \"change_bus_vect\": False,\n", " \"set_topo_vect\": False\n", " }\n", " \n", "train(env,\n", " name=agent_name,\n", " iterations=train_iter,\n", " save_path=save_path,\n", " load_path=None, # put something else if you want to reload an agent instead of creating a new one\n", " logs_dir=logs_dir,\n", " kwargs_archi=kwargs_archi,\n", " training_param=tp,\n", " kwargs_converters=kwargs_converters,)" ] },
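{ "cell_type": "markdown", "metadata": {}, "source": [ "As an aside, you do not have to use a baseline at all: any agent is just a subclass of grid2op's `BaseAgent` that implements an `act` method. Here is a minimal sketch (a do-nothing policy, only meant to show the required interface)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from grid2op.Agent import BaseAgent\n", "\n", "class MyDoNothingAgent(BaseAgent):\n", "    \"\"\"Smallest possible agent: it always returns the do-nothing action.\"\"\"\n", "    def __init__(self, action_space):\n", "        super().__init__(action_space)\n", "\n", "    def act(self, observation, reward, done=False):\n", "        # a real agent would inspect the observation here to pick an action\n", "        return self.action_space()\n", "\n", "my_agent = MyDoNothingAgent(env.action_space)" ] },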
{ "cell_type": "markdown", "metadata": {}, "source": [ "Logs are saved in the \"tf_logs_DDDQN\" directory. To watch what happens during training, you can type the following command (from a bash command line, for example):\n", "```\n", "tensorboard --logdir='tf_logs_DDDQN'\n", "```\n", "You can even do it while the agent is training. TensorBoard allows you to monitor different quantities during training, for example the loss of your neural network, or the number of steps the agent performed before its last game over, etc.\n", "\n", "At first glance, here is what it could look like:\n", "\n", "\n", "Monitoring of the training | Representation of the neural network as a graph\n", ":-------------------------:|:-------------------------:\n", "![](./img/tensorboard_example.png) | ![](./img/tensorboard_graph.png)\n", "\n", "## 2) Evaluate the agent" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from l2rpn_baselines.DuelQSimple import evaluate\n", "path_save_results = \"{}_results\".format(save_path)\n", "\n", "evaluated_agent, res_runner = evaluate(env,\n", " name=agent_name,\n", " load_path=save_path,\n", " logs_path=path_save_results,\n", " nb_episode=2,\n", " nb_process=1,\n", " max_steps=100,\n", " verbose=True,\n", " save_gif=False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for _, _, score, nb_ts, total_ts in res_runner:\n", " print(\"The final score is {:.0f} and {}/{} time steps were successfully performed\".format(score, nb_ts, total_ts))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Of course, you can also have a look at your agent's logs using the dedicated GUI that we developed for this competition.\n", "\n", "It is called \"grid2viz\" and will output something like this (**NB**: these logs were generated for a smaller environment; they are NOT the logs of this agent).\n", "\n", "\n", "## 3) And now you simply need to submit your agent on the codalab platform :-)\n", "\n", "This is beyond the scope of this tutorial. The starting kits of these competitions, which you can download once registered, are there to guide you." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 }