{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# An RL application: managing the powergrid\n", "\n", "You can try this notebook interactively with (click on the logo): [![Binder](./img/badge_logo.svg)](https://mybinder.org/v2/gh/rte-france/Grid2Op/master)\n", "\n", "\n", "During this session we present an application to reinforcement learning in a \"real world\" scenario.\n", "\n", "This notebook is organized as followed:\n", "\n", "In the first section we present the basics of powergrid operations (*ie* what some people are doing 24/7 to ensure the rest of the world has access to as much power as possible). \n", "\n", "In the second section, we will expose how to interact with a powergrid using the openAI gym interface and see how to apply some algorithm to solve these problems.\n", "\n", "At the end of this lecture you should be able to implement some (state of the art?) algorithm that manges the powergrid for a few days.\n", "\n", "**Disclaimer**: This notebook presents the \"powergrid problem\" in a very simplified version. This \"powergrid problem\" (as modelized for the l2rpn competitions [Robustness](https://competitions.codalab.org/competitions/25426) and [Adaptability](https://competitions.codalab.org/competitions/25427)) is itself only a small part of what powergrid operators need to do in real time. This notebook does not pretend at all to be exhaustive in the description of all of these problematics.\n", "\n", "\n", "This notebook is structured as followed:\n", "\n", "1. [Problem statement](#Problem-statement)\n", " 1. [1) Introduction to powergrid](#1\\)-Introduction-to-powergrid)\n", " 2. [2) Definition of the reward](#2\\)-Definition-of-the-reward)\n", " 1. [The joule's effect and the sag](#The-joule's-effect-and-the-sag)\n", " 2. [Thermal limits and protections](#Thermal-limits-and-protections)\n", " 3. [Margin on a powerline](#Margin-on-a-powerline)\n", " 3. [3) Game over](#3\\)-Game-over)\n", " 1. [Game over conditions](#Game-over-conditions)\n", " 2. [Operate in safety](#Operate-in-safety)\n", " 4. [4) Action space](#4\\)-Action-space)\n", " 1. [Modifying the status of the powerlines](#Modifying-the-status-of-the-powerlines)\n", " 2. [Redispatching](#Redispatching)\n", " 1. [first constraints: P = C + losses => sum(redispatch) = 0](#first-constraints:-P-=-C-+-losses-=>-sum\\(redispatch\\)-=-0)\n", " 2. [second contraints: ramps => actual dispatch != target dispatch](#second-contraints:-ramps-=>-actual-dispatch-!=-target-dispatch)\n", " 3. [third constraints: the pmin / pmax](#third-constraints:-the-pmin-/-pmax)\n", " 5. [5) Wrapping up](#5\\)-Wrapping-up)\n", " 6. [6) Optional: diving deeper into grid2op](#6\\)-Optional:-diving-deeper-into-grid2op)\n", " 1. [Dataset](#Dataset)\n", " 2. [Specific rules](#Specific-rules)\n", "2. [Creating some agents](#Creating-some-agents)\n", " 1. [1) Compatibility with gym](#1\\)-Compatibility-with-gym)\n", " 1. [Convert the environment to gym](#Convert-the-environment-to-gym)\n", " 2. [Convert the observation space to gym](#Convert-the-observation-space-to-gym)\n", " 3. [Convert the action space](#Convert-the-action-space)\n", " 2. [2) Some Baselines](#2\\)-Some-Baselines)\n", " 1. [Do nothing \"agent\"](#Do-nothing-\"agent\")\n", " 2. [Random Agent and operational constraints](#Random-Agent-and-operational-constraints)\n", " 3. 
[3) Going further](#3\\)-Going-further)\n", " \n", " \n", "If you want to get directly to the definition of the problem, you can visit the section [5) Wrapping up](#5\\)-Wrapping-up) and if you prefer to dive into the creation of your first agent, then you can switch directly to [Creating some agents](#Creating-some-agents) TODO\n", "\n", "## Problem statement\n", "\n", "\n", "### 1) Introduction to powergrid\n", "For this notebook we will use a dedicated environment called \"educ_case14_redisp\". Grid2op comes with many different environments, with different problems etc. In this notebook, we will only mention and explain this specific environment.\n", "\n", "Power system have one major objective: allow the transmission of electricity from the producers to the consumers as effeciently as possible.\n", "\n", "This environment is based on the \"IEEE case14\" grid studied in the litterature.\n", "\n", "This grid counts 20 power lines (represented in the figure bellow by the black line), 11 loads (each one representing a city or a big industrial consumer) represented by the triangles in the figure bellow and 6 different generators.\n", "\n", "Then generators are represented by pentagons. In reality, and as modeled by our environments, generators have varying properties depending on their types. In this environment there are 2 solars generators (in dark orange), 1 hydro generator (in dark blue), 1 nuclear generator (in yellow) and one \"thermal\" generator (you can imagine powered by coal or natural gas) in violet. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np # 99% of python scripts (for data science) use this package\n", "import grid2op # main package\n", "from grid2op.PlotGrid import PlotMatplot # for representing (plotting) the grid\n", "env = grid2op.make(\"educ_case14_redisp\", test=True) # creating the environment (compatible with openai gym)\n", "plot_helper = PlotMatplot(env.observation_space)\n", "fig = plot_helper.plot_gen_type()\n", "max_iter = 10 # we put 10 here so that the notebook run entirely in a few seconds. put -1 if you don't want to enforce this limitation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**GOAL**: The goal of the \"game\" you will play is the following: given some phyiscal constraints (more on that on the next section) your agent will need to adjust the productions of the dispatchable generators as to minimize the \"margin\" of each powerline on the grid.\n", "\n", "### 2) Definition of the reward\n", "\n", "#### The joule's effect and the sag\n", "In reality, each powerline cannot transmit an infinite amount of power. \n", "\n", "Due to some constraints: for example, when too much current pass on a line, because of the Joule's effect, this powerline heats. And because powerlines are made in metal, when they heat they inflate and so get closer to the ground... or... your house...\n", "\n", "This phenomenon is illustrated in the figure below:\n", "\n", "The flow is relatively small, the powerline is far above the tree\n", "![title](img/sag0.jpg)\n", "\n", "The flow increases, the powerline gets closer to the tree\n", "![title](img/sag1.jpg)\n", "\n", "The flow is too high, the powerline gets closer even closer to the tree and ends up touching it\n", "![title](img/sag2.jpg)\n", "\n", "Which can, in the best case break the powerline or, depending on the season, the weather, etc. 
"\n", "\n", "#### Thermal limits and protections\n", "\n", "To avoid such trouble, companies operating powergrids often set limits on the flows that can be transmitted on the grid. For the sake of the example, we display this limit (called the thermal limit) of each powerline of the grid in the next cell." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_helper.assign_line_palette() # get a prettier coloring for the powerlines\n", "_ = plot_helper.plot_info(line_values=env.get_thermal_limit(), coloring=\"line\", line_unit=\"A\")\n", "plot_helper.restore_line_palette()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And to be certain that the flows stay below these limits, every powerline (in our simplified setting) is equipped with some \"protections\".\n", "\n", "The detailed functioning of these protections is out of the scope of this lecture, but the principle is rather simple. A protection is a piece of equipment that will automatically disconnect a powerline if it detects some danger.\n", "\n", "For this environment these \"protections\" will automatically disconnect a powerline when:\n", "\n", "- the flow on the powerline is above the thermal limit for too long (3 consecutive steps)\n", "- the flow on the powerline exceeds twice the thermal limit (in this case the disconnection is instantaneous)\n", "\n", "#### Margin on a powerline\n", "\n", "Now that we know the current is limited, we can properly define the \"margin\" of a powerline. The margin can be thought of as the amount of current that the powerline can still transmit without any danger.\n", "\n", "More formally, if we denote by `i` the current on a given powerline and by `M` its thermal limit, then the margin is defined, for this environment, as:\n", "```\n", "relative_flow = i / M\n", "raw_margin = 1 - i / M\n", "raw_margin_capped = max(0., raw_margin) # to ensure the margin is between 0 and 1\n", "margin_powerline = raw_margin_capped^2\n", "```\n", "\n", "The \"reward\" of this game is then the sum of `margin_powerline` over all the powerlines of the grid. Note that, with this setting, if a powerline is disconnected its margin is 1.0 (there is no powerflow on it) and if the powerline is in overflow (the flow `i` is equal to or above the thermal limit `M`) then the margin is 0.0, which makes sense because in this case you can't have any more flow on this powerline." ] },
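{ "cell_type": "markdown", "metadata": {}, "source": [ "To make this more concrete, the next cell is a minimal sketch of this reward computation, using the observation attribute `rho` of grid2op (which is precisely the relative flow `i / M` of each powerline). It illustrates the formula above; it is not necessarily the exact code used internally by the environment." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "obs_tmp = env.reset() # an observation, only used to illustrate the formula above\n", "raw_margin_capped = np.maximum(0., 1. - obs_tmp.rho) # obs_tmp.rho is i / M for each powerline\n", "margin_powerline = raw_margin_capped ** 2\n", "print(f\"Reward (sum of the margins): {np.sum(margin_powerline):.2f}\")" ] },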
{ "cell_type": "markdown", "metadata": {}, "source": [ "### 3) Game over\n", "So now we have defined what we need to do: at any time we need to maximize the total margin of the grid. This corresponds to taking as little risk as possible during powergrid operations.\n", "\n", "\n", "#### Game over conditions\n", "But as we saw, a pretty simple \"policy\" would solve this problem very easily. As we explained above, when a powerline is disconnected from the grid, the margin on this powerline is 1.0, which is the highest possible margin. The policy consisting in \"disconnecting every powerline\" would then be extremely efficient. This makes total sense: if everything is disconnected, then no power is flowing and everything is safe [this is why we would recommend switching off the house circuit breaker before touching any power plug in your house].\n", "\n", "But this solution is not satisfactory. The goal of the powergrid is indeed to bring power to as many people / companies as possible. In our framework grid2op we modeled this phenomenon by introducing some \"game over\" criteria. And, when the game is over, you lost the game.\n", "\n", "To be perfectly exhaustive, there are 4 game over conditions for this environment at the moment:\n", "\n", "- a load is disconnected from the grid\n", "- a generator is disconnected from the grid\n", "- the grid is split into independent parts\n", "- some technical conditions imposed by the solving of the powerflow equations (out of scope)\n", "\n", "\n", "#### Operate in safety\n", "So as you see above, the \"game over conditions\" make the simple policy of disconnecting everything pretty useless: it would lead to a game over at the first time step.\n", "\n", "In this environment, the objective is then to manage different scenarios, each representing a given day, while avoiding a game over.\n", "\n", "\n", "### 4) Action space\n", "So now, let's dive into the details of what your actions are.\n", "\n", "For this environment you are allowed to do two different things:\n", "\n", "- toggle the status of some powerlines: this means connecting or disconnecting some powerlines\n", "- apply redispatching actions: adapt the production of the generators to solve the issues.\n", "\n", "In this section we'll detail what these actions are exactly doing, and how to implement them in the grid2op framework.\n", "\n", "#### Modifying the status of the powerlines\n", "Powerlines can be switched on / off. We saw a bit earlier that they can be switched off by \"protections\" in case of overflow (see the section [Thermal limits and protections](#Thermal-limits-and-protections) for more information).\n", "\n", "But the agent can also willingly disconnect a powerline and of course, symmetrically, reconnect it.\n", "\n", "Let's plot the initial observation of the powergrid:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "obs = env.reset()\n", "fig_ = plot_helper.plot_obs(obs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On this plot we see different things. \n", "\n", "First, there is some color on some powerlines: this means that the flow on them is less than 50% of the thermal limit. Then we see some powerlines that are orange: in this case the flow on them is between 50% and 100% of their thermal limit. If a powerline were to appear red, it would mean it is in overflow.\n", "On each powerline, the flow (given by default as a percentage of the thermal limit) is displayed near the powerline.\n", "\n", "We also notice some values next to the generators (the amount of power they are producing) and some (negative) values next to the loads, representing the power they are consuming.\n", "\n", "And now let's disconnect the powerline with id `14` and display the resulting state on the screen.\n", "\n", "(you can change which line is disconnected by changing the value of the `line_id` variable in the cell below)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "line_id = 14\n", "disco_line = env.action_space({\"change_line_status\": [line_id]})\n", "new_obs, reward, done, info = env.step(disco_line)\n", "fig_ = plot_helper.plot_obs(new_obs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this case we can notice that this powerline (at the top) is now displayed in a dark dashed fashion. This means that it is disconnected." ] },
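{ "cell_type": "markdown", "metadata": {}, "source": [ "We can also check this programmatically. The small sketch below uses the observation attribute `line_status` (a boolean vector with one entry per powerline) and the `done` flag returned by `env.step`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"Is line {line_id} connected: {new_obs.line_status[line_id]}\") # should be False now\n", "print(f\"Is the episode over (game over): {done}\")" ] },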
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Redispatching\n", "Now that we saw how to disconnect some powerlines, let's see how to perform the other kind of action available: redispatching.\n", "\n", "##### What is redispatching\n", "But first, what is redispatching, concretely? \n", "\n", "Redispatching means modifying the schedule of the generators (a schedule that was usually planned the day before, either by a central authority or by an economic market). \n", "\n", "This is done in a cumulative manner, on top of what the scheduled production would have been.\n", "\n", "For example, if \"someone\" (a central authority or a market) decided that generator 1 was to produce `77.1`MW at 00:10am and you decided at 00:05am to do a redispatching of `+7.0`MW on this same generator, then the production of this generator at 00:10am will be `77.1MW [decided the day before by the market or a central authority] + (+7.0MW) [decided by you at 00:05am] = 84.1MW`.\n", "\n", "Let's see an example in the cell below:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gen_id = 1\n", "dispatch_amount = +7\n", "dispatch_action = env.action_space({\"redispatch\": [(gen_id, dispatch_amount)]})\n", "new_obs, reward, done, info = env.step(dispatch_action)\n", "fig_ = plot_helper.plot_obs(new_obs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see here, the powerline is still disconnected (of course, no one has reconnected it), but you don't really observe the redispatching on this state. Fortunately, you have a way to plot the redispatching that was performed. We'll do it in the next cell:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig_ = plot_helper.plot_current_dispatch(new_obs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see on the figure above, generator 1 has indeed seen its production increased by `7`MW (hence the `7.00`MW displayed), but we can see something else.\n", "\n", "##### first constraints: P = C + losses => sum(redispatch) = 0\n", "The generators 0 and 5 see their production decreased, by `1.75`MW (for generator 0) and `5.25`MW (for generator 5).\n", "\n", "But why is that? This is because a powergrid needs to be at a steady state: the total generation and the total load must be balanced (up to the losses). \n", "\n", "This entails that if you ask to increase the generation somewhere, the environment has no choice but to decrease the production somewhere else." ] },
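{ "cell_type": "markdown", "metadata": {}, "source": [ "We can verify this balance numerically: by construction, the redispatching actually implemented should sum to (approximately) zero. A quick check, using the observation attribute `actual_dispatch` (described in more detail below):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"Redispatching implemented for each generator: {new_obs.actual_dispatch}\")\n", "print(f\"Sum of the redispatching: {np.sum(new_obs.actual_dispatch):.4f} MW (should be ~0)\")" ] },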
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now let's see what happens if I want to increase the production of generator 1 by `10`MW again. The total \"setpoint\" for the redispatch on this generator should then be `(+7MW) [action at the previous step] + (+10MW) [action at this step] = +17MW`.\n", "\n", "Let's see what happens if I do that:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gen_id2 = 1\n", "dispatch_amount2 = +10\n", "dispatch_action2 = env.action_space({\"redispatch\": [(gen_id2, dispatch_amount2)]})\n", "new_obs2, reward, done, info = env.step(dispatch_action2)\n", "fig_ = plot_helper.plot_current_dispatch(new_obs2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### second constraints: ramps => actual dispatch != target dispatch\n", "Oh, oh: the dispatch setpoint of generator 1 is `15.6` but it should be `+17.0`. Why?\n", "\n", "This is not a bug; in fact there are some physical constraints on the generators. Their production cannot increase (or decrease) too much between two consecutive steps. This is a very important phenomenon that has been taken into account in grid2op.\n", "\n", "Actually, if you look at the difference between the production in the current observation (denoted by `new_obs2`) and in the previous one (that was called `new_obs`), you will see:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "new_obs2.gen_p[gen_id2] - new_obs.gen_p[gen_id2]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And if you compare it to the `maximum ramp down` (a fancy name for \"the maximum value by which a generator's production can decrease between two consecutive steps\"), you will see that the number above matches this physical constraint: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "env.gen_max_ramp_down[gen_id2]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Indeed, the production increased by `10.000053`MW (~`10.0`, up to rounding). And this `10.0`MW is the maximum allowed by the physical limit of this generator (as shown in the cell above, `env.gen_max_ramp_down[gen_id2]` is `10.0`).\n", "\n", "This means that if the environment had implemented the entire redispatching action, this would have resulted in breaking generator `1`, because its power would have changed too much between two consecutive steps. This is why the environment automatically \"limits\" the action, in this case to `+15.6`MW instead of the `+17`MW asked in the action.\n", "\n", "This is why there are two distinct pieces of information about the redispatching:\n", "- `actual_dispatch`: the redispatching really implemented on the grid. It always sums to 0.0 (up to rounding) and takes into account the limits of the generators\n", "- `target_dispatch`: the sum of all the redispatching actions that the operator wanted to perform.\n", "\n", "You have access to these values in `obs.actual_dispatch` and `obs.target_dispatch`. We already showed how to plot the \"actual_dispatch\"; now let's see how to plot the \"target_dispatch\":" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig_ = plot_helper.plot_current_dispatch(new_obs2, do_plot_actual_dispatch=False)" ] },
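{ "cell_type": "markdown", "metadata": {}, "source": [ "The next cell is a small numerical check of what we just described, reading the two attributes directly from the observation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(f\"Target dispatch of generator {gen_id2}: {new_obs2.target_dispatch[gen_id2]:.2f} MW\")\n", "print(f\"Actual dispatch of generator {gen_id2}: {new_obs2.actual_dispatch[gen_id2]:.2f} MW\")\n", "print(f\"Sum of the actual dispatch: {np.sum(new_obs2.actual_dispatch):.4f} MW (still ~0)\")" ] },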
{ "cell_type": "markdown", "metadata": {}, "source": [ "##### third constraints: the pmin / pmax\n", "Ok, so now let's see a third (and last) type of physical constraint that is important for generators.\n", "\n", "In reality, just like you cannot run at 50km.h^-1 or ride a bike at 200km.h^-1, and just like your car cannot drive at 600km.h^-1 nor a plane fly at 100,000km.h^-1, there are maximum (and minimum) values a generator can produce.\n", "\n", "Any attempt to go above pmax or below pmin will be \"limited\" by the environment, with the same mechanism as detailed above. For example, if a generator is operated at pmax and you ask to increase its production, then this part of the action will not be implemented on the grid.\n", "\n", "### 5) Wrapping up\n", "\n", "Congratulations, you just learned a new game based on a real industrial system!\n", "\n", "This new \"game\" can be summarized as maintaining a powergrid in safety (see sections [Game over conditions](#Game-over-conditions) and [Operate in safety](#Operate-in-safety)) for a certain amount of time (in this environment, a day) while using actions consisting in \"[Modifying the status of the powerlines](#Modifying-the-status-of-the-powerlines)\" or performing \"[Redispatching](#Redispatching)\".\n", "\n", "Your reward will be the sum of the margins on the grid (see [Margin on a powerline](#Margin-on-a-powerline)). It is highest when the flows are lowest. It is always positive, and it is 0.0 after a game over, for the rest of the scenario. \n", "\n", "As in any reinforcement learning setting, you need to maximize this reward.\n", "\n", "Good luck!\n", "\n", "\n", "### 6) Optional: diving deeper into grid2op\n", "For this lecture it's probably not necessary to go into such detail, but you might encounter some \"hard to explain\" behaviour when building your first agent. \n", "\n", "#### Dataset\n", "For this environment you have at your disposal a few scenarios, each representing the equivalent of a day (24 hours) of grid operation at a 5 minutes resolution. \n", "\n", "The datasets are located at:\n", "```python\n", "print(f\"Data are located at {env.chronics_handler.path}\")\n", "```\n", "And you can list what is in there:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "all_days = sorted(os.listdir(env.chronics_handler.path))\n", "all_days" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A typical grid2op dataset is made of scenarios. Each scenario has a name (in this case the name matches the day this scenario represents).\n", "\n", "So for this environment you have 7 different days. For each day, the scenario is made (in this case) of 4 csv files, among which:\n", "- \"load_p.csv.bz2\" represents the \"active loads\" (how much power people will consume at each step)\n", "- \"prod_p.csv.bz2\" represents the \"active generation\" (how much power the powerplants will produce at each step). Note that this represents their production in case there is no redispatching.\n", "\n", "The other files are important for grid2op but are out of the scope of this notebook.\n", "\n", "You can list them with the command below:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sorted(os.listdir(os.path.join(env.chronics_handler.path, all_days[0])))" ] },
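{ "cell_type": "markdown", "metadata": {}, "source": [ "If you are curious, you can also peek at one of these time series directly. The cell below is a sketch, assuming the csv files are ';'-separated (the convention used by grid2op chronics); it is not needed for the rest of the notebook:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "# the active load time series of the first scenario: one column per load, one row per step\n", "load_p = pd.read_csv(os.path.join(env.chronics_handler.path, all_days[0], \"load_p.csv.bz2\"), sep=\";\")\n", "load_p.head()" ] },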
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Specific rules\n", "One more thing that might be useful.\n", "\n", "In grid2op there are different kinds of rules that \"model\" some constraints the powergrid operators have to deal with in reality.\n", "\n", "These \"constraints\" are similar to \"rules\" in video games. See the section [Random Agent and operational constraints](#Random-Agent-and-operational-constraints) for more information about some of them.\n", "\n", "The main rules used for this environment are:\n", "\n", "- **cooldown**: if you act on a powerline, you have to wait a certain number of steps (3 in this case) before you can act on the same powerline again\n", "- **restoration time**: if a powerline is disconnected by the \"protections\" (see section [Thermal limits and protections](#Thermal-limits-and-protections)), you have to wait 12 steps before you can reconnect it. This models the fact that, in reality, you have to send a team of people to check that the powerline can be safely put back into service.\n", "\n", "If an action would violate these rules, it is replaced by \"do nothing\" (see [Random Agent and operational constraints](#Random-Agent-and-operational-constraints) for a more detailed explanation).\n", "\n", "\n", "We also remind that the environment takes into account some physical constraints:\n", "\n", "Some powerlines are disconnected when they are in overflow for too long, or when their overflow is too high (see section [Thermal limits and protections](#Thermal-limits-and-protections)), and there is nothing the agent can do about that (besides, of course, reducing the flow on the concerned powerlines).\n", "\n", "The redispatching implemented may differ from what the agent asked for. This is mainly due to the physical aspects of powergrids detailed in section [Redispatching](#Redispatching). " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating some agents\n", "\n", "Ok, now we know all about the problem we want to solve in this environment. We recall that this problem is explained in more detail in the [5) Wrapping up](#5\\)-Wrapping-up) section (with hypertext links pointing to the most relevant sections if you have any question or need a reminder).\n", "\n", "### 1) Compatibility with gym\n", "Grid2op is a framework fully compatible with openAI-gym. This means that it exposes the same interface (`env`, `env.step`, `env.reset`, `agent.act` etc.) but this does not entail that it is a gym environment. \n", "\n", "Actually, for people who don't know openAI gym (and this is the case of most people having a \"power system\" background), we did not want to add more complexity to this problem.\n", "\n", "However we put a lot of effort into making the use of grid2op with this framework as easy as possible.\n", "\n", "\n", "#### Convert the environment to gym\n", "To benefit from the openAI gym integration, the first thing you need to do is convert the grid2op environment to an openAI gym environment. The next cell shows an example of how to do it. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import gym\n", "from grid2op.gym_compat import GymEnv\n", "gym_env = GymEnv(env)\n", "print(f\"Is env an openAI gym environment: {isinstance(env, gym.Env)}\")\n", "print(f\"Is gym_env an openAI gym environment: {isinstance(gym_env, gym.Env)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**NB** some features of grid2op are only available when the environment is presented as a grid2op environment and not as a gym environment. These exact features are out of the scope of this notebook." ] },
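{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check (a sketch, not required for the rest of the notebook), the wrapper follows the usual gym API: `reset` returns an observation and `step` returns the familiar `(observation, reward, done, info)` tuple:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "obs_gym = gym_env.reset()\n", "obs_gym, reward_gym, done_gym, info_gym = gym_env.step(gym_env.action_space.sample())\n", "print(f\"Some observation keys: {list(obs_gym.keys())[:5]} ...\") # the observation is dict-like\n", "print(f\"reward: {reward_gym}, done: {done_gym}\")" ] },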
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Convert the observation space to gym\n", "\n", "Strictly speaking, the observation space and the action space are now gym spaces, as shown in the next cell.\n", "\n", "They are of type `Dict`, with the keys being the attributes present in the observation (or the action), and the values depending on the type of variable: often \"Box\" for floating point numbers (continuous variables, such as the flows on the powerlines) and \"MultiBinary\" for discrete variables, for example the status (connected / disconnected) of the powerlines.\n", "\n", "The observation space, in this case, is:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gym_env.observation_space" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Phew, that is a lot of information! There is plenty of data available and we don't really know what to do with all of it. Let's simplify it a bit and keep only some interesting components:\n", "\n", "- `line_status`: tells whether a powerline is connected or disconnected\n", "- `rho`: the flows, expressed as a percentage of the thermal limit (see [Thermal limits and protections](#Thermal-limits-and-protections))\n", "- `actual_dispatch`: the current state of the redispatching (see [Redispatching](#Redispatching))\n", "- `target_dispatch`: the setpoint given by the agent until then\n", "- `time_before_cooldown_line`: the number of steps to wait before being allowed to act again on each powerline (see [Specific rules](#Specific-rules))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ob_space = gym_env.observation_space\n", "ob_space = ob_space.keep_only_attr((\"rho\", \"line_status\", \"actual_dispatch\", \"target_dispatch\", \"time_before_cooldown_line\"))\n", "gym_env.observation_space = ob_space\n", "gym_env.observation_space" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ob_space.spaces.keys()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Much better now, isn't it?\n", "\n", "For example, let's sample an observation, to see what type of data an agent will receive:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gym_env.observation_space.sample()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We see here that the (sampled) dispatch can take rather large values (remember that openAI gym, by default, does not take into account all the information given in part [Problem statement](#Problem-statement)).\n", "\n", "It's considered good practice, in general, to feed a neural network with data approximately in the range [-1, 1]. Again, scaling the data in grid2op is relatively easy, and can be done the following way:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from grid2op.gym_compat import ScalerAttrConverter\n", "ob_space = gym_env.observation_space\n", "ob_space = ob_space.reencode_space(\"actual_dispatch\",\n", "                                   ScalerAttrConverter(substract=0.,\n", "                                                       divide=env.gen_pmax,\n", "                                                       init_space=ob_space[\"actual_dispatch\"])\n", "                                   )\n", "ob_space = ob_space.reencode_space(\"target_dispatch\",\n", "                                   ScalerAttrConverter(substract=0.,\n", "                                                       divide=env.gen_pmax,\n", "                                                       init_space=ob_space[\"target_dispatch\"])\n", "                                   )\n", "gym_env.observation_space = ob_space\n", "gym_env.observation_space.sample()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Much better now: all the variables are in the range [-1, 1]." ] },
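{ "cell_type": "markdown", "metadata": {}, "source": [ "We can double check this on the gym space itself: after the rescaling, the bounds of the underlying \"Box\" for the dispatch should be close to -1 and 1 (a small sketch):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "box_dispatch = gym_env.observation_space.spaces[\"actual_dispatch\"]\n", "print(f\"low: {box_dispatch.low.min()}, high: {box_dispatch.high.max()}\")" ] },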
\n", "\n", "So to summarize what we did in this subpart:\n", "\n", "- first we extracted some relevant information of the observation space\n", "- then we noticed some values were out of a \"normal\" range so we rescale them\n", "\n", "This way, the agent will only have access to relevant information (preventing overfitting for example) and it will be able to learn as efficiently as possible.\n", "\n", "#### Convert the action space\n", "\n", "As for the observation space, we will also try to convert the action space.\n", "\n", "But first, let's see what it looks like:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "act_space = gym_env.action_space\n", "act_space" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So we have to kind of actions:\n", "- \"*change_line_status*\": if we want to change the status of a powerline (for example connect it if it is disconnected, or disconnect it if it's connected) cf. [Modifying the status of the powerlines](#Modifying-the-status-of-the-powerlines)\n", "- \"*redispatch*\": if we want to apply redispatching actions cf. [Redispatching](#Redispatching)\n", "\n", "Let's see how an action should look like:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "act_space.sample()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The line status change is pretty straight forward: we put 1 when we want to toggle the status of the powerline and 0 other.\n", "\n", "What might be a bit complicated here is the encoding of the redispatching. We have to pass a vector of floating point numberd and continuous actions are always more difficult than discrete actions. It this section we will encode this space so that, for each generator you will have 3 choices: \n", "- dispatch positively (increase the production of this generator)\n", "- dispatch at all\n", "- dispatch negatively (decrease the production of this generator)\n", "\n", "To that end we will" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from grid2op.gym_compat import ContinuousToDiscreteConverter\n", "act_space = act_space.reencode_space(\"redispatch\",\n", " ContinuousToDiscreteConverter(nb_bins=3,\n", " init_space=act_space[\"redispatch\"])\n", " )\n", "gym_env.action_space = act_space\n", "act_space" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ok great so now everything is discrete, which should be much better. Let's see how an action looks like:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "act_space.sample()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2) Some Baselines\n", "\n", "#### Do nothing \"agent\"\n", "The first baseline we can implement is the \"do nothing\" agent. This agent will simply .... 
"\n", "Let's see how it performs here by evaluating it on the first week (remember each episode represents a day, so to evaluate it on a full week we need to run it on 7 episodes).\n", "\n", "In fact this agent is so simple that we will not even bother writing a class for it :-)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nb_day = 1 # put 7 for a \"real test\"; we put 1 to increase speed during tests\n", "from tqdm.notebook import tqdm # for better progress bars\n", "\n", "# build the do nothing action\n", "do_nothing_act = gym_env.action_space.sample()\n", "do_nothing_act[\"change_line_status\"][:] = 0 # I enforce the fact \"I don't modify any line status\"\n", "do_nothing_act[\"redispatch\"][:] = 1 # I enforce the fact \"I don't modify any generator\" (bin 1 is the middle bin, *ie* 0 MW)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# run the episodes\n", "cum_reward = []\n", "nb_ts_survived = []\n", "for i in tqdm(range(nb_day)):\n", "    obs = gym_env.reset()\n", "    reward = gym_env.reward_range[0]\n", "    done = False\n", "    ts = 0\n", "    tmp_cum_reward = 0.\n", "    while not done:\n", "        obs, reward, done, info = gym_env.step(do_nothing_act)\n", "        ts += 1\n", "        tmp_cum_reward += reward\n", "        if max_iter != -1 and ts >= max_iter:\n", "            break\n", "    nb_ts_survived.append(ts)\n", "    cum_reward.append(tmp_cum_reward)\n", "print(f\"Over the {nb_day} day(s), the do nothing agent survived on average {np.mean(nb_ts_survived):.2f} / 288 steps\")\n", "print(f\"Its average reward is {np.mean(cum_reward):.2f}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Random Agent and operational constraints\n", "Now that we know how to do nothing (well, kind of), let's build our first acting agent: an agent that takes random actions.\n", "\n", "Before we do that, we need to explain some \"operational constraints\", which are similar to \"rules\" in video games.\n", "\n", "A first rule is that you cannot change the status of more than 1 powerline at each step. This constraint comes from the fact that, today, human operators are in command of the grid, and humans are not known to multitask really well, especially when there is so much at stake. To reduce the possibility of errors, each operator is asked to only do one thing at a time.\n", "\n", "A second rule comes from the material. A breaker on a powergrid does not exactly look like the switches in your house. A typical breaker can look like this:\n", "![](./img/circuit_breaker.png)\n", "*image credit*: https://www.globalspec.com/learnmore/electrical_electronic_components/electrical_distribution_protection_equipment/circuit_breakers\n", "\n", "And well... who would want to operate this thing by turning it on and off again at the risk of breaking it? No one really wants that; this is why there is some \"cooldown time\": when you act on a breaker at a given step, you cannot act on it again during the next 2 or 3 steps.\n", "\n", "\n", "If you \"break these rules\", your action is automatically replaced by \"do nothing\". This is exactly what happens in video games: if you face a wall, a \"rule\" prevents you from crossing it (in most video games at least), and most of the time, if you try to \"go forward\" when facing this wall, your character will not move an inch. This is the same for the powergrid: if a wrong \"command\" is sent to the system, nothing is done at all.\n", "\n", "Given that, let's see how we can build a random agent that meets these constraints."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class RandomAgentWithConstraints:\n", " def __init__(self, action_space):\n", " self.action_space = action_space\n", " self.do_nothing_act = action_space.sample()\n", " self.do_nothing_act[\"change_line_status\"][:] = 0 # i enforce the fact \"i don't modify any line status\"\n", " self.do_nothing_act[\"redispatch\"][:] = 1 # i enforce the fact \"i don't modify any generators\"\n", " \n", " def act(self, obs, reward, done):\n", " # generate a random action that do not meet the oeprationnal constraints\n", " res = self.action_space.sample()\n", " # select only one powerline among the powerline that are supposed to be disconnected\n", " # in the random action\n", " # and set the other one to 0\n", " ls = res[\"change_line_status\"]\n", " if np.any(ls > 0):\n", " id_ = np.random.choice(np.where(ls)[0], 1)\n", " ls[:] = 0\n", " ls[id_] = 1\n", " return res" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cum_reward_2 = []\n", "nb_ts_survived_2 = []\n", "random_with_const = RandomAgentWithConstraints(gym_env.action_space)\n", "for i in tqdm(range(nb_day)):\n", " obs = gym_env.reset()\n", " reward = gym_env.reward_range[0]\n", " done = False\n", " ts = 0\n", " tmp_cum_reward = 0.\n", " while not done:\n", " action = random_with_const.act(obs, reward, done)\n", " obs, reward, done, info = gym_env.step(action)\n", " ts += 1\n", " tmp_cum_reward += reward\n", " if max_iter != -1 and ts >= max_iter:\n", " break\n", " nb_ts_survived_2.append(ts)\n", " cum_reward_2.append(tmp_cum_reward)\n", "print(f\"Over the {nb_day} the random agent survived on average {np.mean(nb_ts_survived_2):.2f} / {max_iter} steps\")\n", "print(f\"Its average reward is {np.mean(cum_reward_2):.2f}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, and this is pretty common in \"real systems\" (as opposed to video games) doing nothing is most of the time a good strategy: the \"do nothing agent\" survived on average 205 steps while an agent that takes random actions only surive 3-4 steps (depending on which random actions are taken)\n", "\n", "This is mainly due to the disconnection of powerlines.\n", "\n", "\n", "### 3) Going further\n", "\n", "1. Try to implement an agent that only takes random redispatching action\n", "2. Can you code an agent that reconnect powerlines when they are disconnected, and only then ?\n", "3. Could you imagine a way to have a good performing agent ?" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 }