{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# In this notebook you will learn basic information about redispatching capabilities offered by grid2op." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Objectives**\n", "\n", "As for now, we presented a type of action available in grid2op: a discrete action space. Redispatching is a kind of continuous action that will be described here.\n", "\n", "This notebook will:\n", "\n", "- present what is redispatching\n", "- show how it can be used in grid2op\n", "- detail the actions related to it\n", "- show an example of a redispatching Agent." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import os\n", "import sys\n", "import grid2op\n", "from grid2op.Agent import DoNothingAgent, Agent\n", "from tqdm.notebook import tqdm\n", "import numpy as np\n", "max_iter = 100 # to make computation much faster we will only consider 50 time steps instead of 287\n", "import pdb" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
run previous cell, wait for 2 seconds
\n", "" ], "text/plain": [ "" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "res = None\n", "try:\n", " from jyquickhelper import add_notebook_menu\n", " res = add_notebook_menu()\n", "except ModuleNotFoundError:\n", " print(\"Impossible to automatically add a menu / table of content to this notebook.\\nYou can download \\\"jyquickhelper\\\" package with: \\n\\\"pip install jyquickhelper\\\"\")\n", "res" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What is redispatching" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "TODO" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## How to implement redispatching actions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Having a suitable environment" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Is this environment suitable for redispatching: True\n" ] } ], "source": [ "env_wrong = grid2op.make(\"case5_example\")\n", "print(\"Is this environment suitable for redispatching: {}\".format(env_wrong.redispatching_unit_commitment_availble))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, on the cell above, the simple environment example is not suitable for redispatching. By default, some environments doesn't specify the cost of generators, their maximum and minimum production values etc. In this case it is not possible to use this grid2op feature. \n", "\n", "To know more about what is needed for using redispatching, it is advised to look at the online help https://grid2op.readthedocs.io/en/latest/space.html#grid2op.Space.GridObjects.redispatching_unit_commitment_availble for the most recent documentation. When this notebook was created, what was needed was:\n", "\n", "- \"gen_type\": the type of generator\n", "- \"gen_pmin\": the minimum value a generator can produce\n", "- \"gen_pmax\" : the maximum value a generator can produce\n", "- \"gen_redispatchable\": is this generator can be dispatched\n", "- \"gen_max_ramp_up\": the maximum increase of power a generator can have between two consecutive time steps\n", "- \"gen_max_ramp_down\": the maximum decrease of power a generator can have between two consecutive time steps\n", "- \"gen_min_uptime\": the minimum time a generator need to be turned on (it's impossible to disconnect it if it's not connected for a least \"gen_min_uptime\" consecutive time step)\n", "- \"gen_min_downtime\": same as above, but for down time\n", "- \"gen_cost_per_MW\": the generation cost. For each MW how much is paid\n", "- \"gen_startup_cost\": the cost to start a generator\n", "- \"gen_shutdown_cost\": the cost to shutdown a generator\n", "\n", "We made available a dedicated environment, based on the IEEE case14 powergrid that has all this features. It is advised to use this small environment for testing and get familiar with this feature.\n", "\n", "This environment counts 5 generators, like the original case14 system. It has one solar and one wind generator (that cannot be dispatched), one nuclear powerplant (dispatchable) and 2 thermal generators (dispatchable also). This problem is then a problem of continuous control with 3 degress of freedom." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/donnotben/Documents/Grid2Op_dev/getting_started/grid2op/MakeEnv.py:652: UserWarning:\n", "\n", "Your are using only 2 chronics for this environment. More can be download by running, from a command line:\n", "python -m grid2op.download --name \"case14_redisp\" --path_save PATH\\WHERE\\YOU\\WANT\\TO\\DOWNLOAD\\DATA\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Is this environment suitable for redispatching: True\n" ] } ], "source": [ "env = grid2op.make(\"case14_redisp\")\n", "print(\"Is this environment suitable for redispatching: {}\".format(env.redispatching_unit_commitment_availble))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can notice 2 things:\n", "\n", "- There is a warning about the number of scenarios available for this small environment. 2 scenarios are include in the pypi package. Please see the documentation if you want more.\n", "- This environment is indeed suitable for redispatching. It means all quantities described above are set (and visible).\n", "\n", "In the L2RPN 2019 challenges, we rewarded participants based on the utilization of the powerline. In next challenges, or for other usage of this platform where redispatching plays a role, it's better to consider the economic cost of the sytem. However, usually the cost is minimized, while the reward is maximized. To take this into account, a simplistic reward named \"EconomicReward\" has been created. It has the following property:\n", "\n", "- it returns -1 if there has been a game over\n", "- otherwise (no game over, no error, etc.) it is always strictly positive\n", "- maximizing this reward is equivalent to minimizing the cost\n", "\n", "\n", "Note that this reward doesn't take into account the cost to perform a redispatching action. This reward can be used to build what is called an \"economic dispatch\", a problem especially interesting for electricity producers that is of lower interest for Transmission System Operators (as opposed to the topology).\n", "\n", "Compared to standard \"economic dispatch\" problems, for now storages are not implemented (coming soon) and we don't fully take into account startup cost, shutdown cost, as well as min downtime and min uptime (even though all of these features are implemented). Also, note that the redispatching is implemented in \"delta\" it means you need first to provide an economic dispatch, and then you reason in terms of variation compare to it. This is the usage explained in this notebook. For real unit commitment / economic dispatch problem the key words \"injections\" / \"prod_p\" in the action would probably be much suited." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Redispatching implementation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Unlike topological actions, that are always feasible (assumption made in this package) redispatching actions are limited by physical constraints on the generators. For example:\n", "\n", "- it is not possible for a generator to produce more (resp. less) than pmax (resp. pmin). Unlike the curent flow on the powerline, this is an hard physical constraints.\n", "- it is not possible, for the same physical limitations to increase (or decrease) too much the value between two consecutive time steps.\n", "- redispatching actions stack with one another. It means if you ask to augment the production of generator 1 of $10MW$ at time step t, and of $20MW$ at time step $t+1$, it means the set point at time step $t+1$ will be $+30MW$ at this time step compared to a sate where no redispatching is made (it would have been of $+10MW$ if the second redispatching action were not performed).\n", "\n", "Having said this, a lot of thing can happen, that makes redispatching a bit less trivial than topology:\n", "- When you do a redispatching action, you don't know what the time series of the environment look like. For example, say the pmax of generator 1 is 100. The setpoint of this generator at time t is $60MW$, and you want to increase its value of $40MW$. This action is legal: $60+40 \\leq pmax (=100) $. So at time $t$ everything is fine. Now let's suppose the environment also moved the setpoint of this same generator from $60$ to be at $70MW$ at time $t+1$. With the redispatching action, this would mean the setpoint asked is $70+40 = 110 > pmax$. This is not possible. In this case, the redispatching action will be capped. Among the desired redispatching of $+40$, only $+30$ will be implemented on the powergrid. \n", "- Another problem can arise with the fundamental principle of power grid: power energy cannot be stored on a grid. At each time we then have $\\sum \\text{Prod} = \\sum \\text{Load} + \\text{Losses}$. In this competition, the data are generated such that this condition holds (approximately) for all time steps. This means that if you ask a redispatching of +xxx MW on a given generator, then the other one must compensate and \"absorb\" xxx MW such that it sums at 0 overall.\n", "\n", "Out of simplicity for the participants, there are some \"automaton\" that automatically transform an proposed redispatching action into a valid redispatching action. These automatons basically ensure that the two above-mentionned conditions hold. It explains the differences between \"target_redispatching\" which is the setpoint enter by the agent, and the \"actual_redispatching\" which what has been implemented on the powergrid after these automatons work." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### More cases for ambiguous actions" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ True, True, False, False, True])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "env.gen_redispatchable" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The above vector says which generator is dispatchable and which is not. Any attempt to dispatch a generator that is not dispatchable leads to an ambiguous action." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(True,\n", " Grid2OpException AmbiguousAction InvalidRedispatching InvalidRedispatching('Trying to apply a redispatching action on a non redispatchable generator',))" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "act = env.action_space({\"redispatch\": [(2,+10)]})\n", "act.is_ambiguous()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we see, this action is ambiguous. And this is ambiguous due to `Trying to apply a redispatching action on a non redispatchable generator`.\n", "\n", "Generators have also physical constraints. You cannot ask them to change the active production value too fast, this would damage the generator, and breaking a nuclear plant is often a terrible idea. In grid2op it is implemented as ambiguous action. Trying to go beyond this value will result in an ambiguous action.\n", "\n", "This value is called the \"ramp\" and it's available through the \"max_ramp_up\" attribute. On the next cell, you see the ramp is of $5$ for the first generator, for $10$ for second and last generators. For the other 2 it's irrelevant because they are not dispatchable." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 5., 10., 0., 0., 10.])" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "env.gen_max_ramp_up" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Any attempt to go beyond this value will raise an ambiguous error (remember index in python starts at 0)." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(True,\n", " Grid2OpException AmbiguousAction InvalidRedispatching InvalidRedispatching('Some redispatching amount are above the maximum ramp up',))" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "act = env.action_space({\"redispatch\": [(0,+10)]})\n", "act.is_ambiguous()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the previous action, we asked the generator 0 to produce 10MW more than it's setpoint. However, the maximum ramp up is only of 5MW. This action is then ambiguous.\n", "\n", "And of course, there are some perfectly valid redispatching action:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(False, None)" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "act = env.action_space({\"redispatch\": [(1,+10)]})\n", "act.is_ambiguous()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The setpoint is not the implementation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As said in the preamble of this section, the target dispatching, what we want to achieve (the target), is not equal to the dispatching implemented. To make transparent what is being done, both these values are present in the observation, as shown in the cell below." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 5. , -2.5, 0. , 0. , -2.5])" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# perform a valid redispatching action\n", "env.set_id(0) # make sure to use the same environment input data.\n", "obs_init = env.reset() # reset the environment\n", "act = env.action_space()\n", "act = env.action_space({\"redispatch\": [(0,+5)]})\n", "# act = env.action_space({\"redispatch\": [(0,0)]})\n", "obs, reward, done, info = env.step(act)\n", "obs.actual_dispatch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The target dispatch is exactly what we wanted, eg increasing generator with id 0 of +5 MW. To compensate for this increase, both generator 1 and 4 have seen their setpoint diminish from 2.5MW." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[ 5. -2.5 0. 0. -2.5]\n", "[ 5. -2.5 0. 0. -2.5]\n" ] } ], "source": [ "donothing = env.action_space()\n", "obs, reward, done, info = env.step(donothing)\n", "print(obs.actual_dispatch)\n", "obs, reward, done, info = env.step(donothing)\n", "print(obs.actual_dispatch)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the above cell, we didn't performed any redispatching action. We just do nothing. This example is here to illustrate that, until the original redispatching action is removed (ie until the opposite command is sent), grid2op will continue to apply the previous redispatching configuration over time.\n", "\n", "Here, the original redispatching action is to increase of +5MW generator 0, in this case removing it means decreasing it of 5MW." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 1.30935415, -0.65710904, 0. , 0. , -0.65224512])" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "act = env.action_space({\"redispatch\": [(0,-5)]})\n", "# act = env.action_space({\"redispatch\": [(0,0)]})\n", "obs, reward, done, info = env.step(act)\n", "obs.actual_dispatch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we see in the cell above, there are still a residual on the dispatch. This is because of the physical limit of the ramp of the generator 0. It was above it's normal setpoint of +5MW (action at cell 10). We wanted it to return to its setpoint value (action cell 12). But at the same time step, the environment also modified the setpoint of this generator of -1.3MW. The ramp down for this step being $5+1.3 = 6.3 > maxrampdown$, grid2op capped the redispatch occuring at this timestep to $maxrampdown$\n", "\n", "That is why we can see a small part of the dispatch left. If we wait another timestep and do nothing, the generator will likely be in order." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0. 0. 0. 0. 0.]\n", "[0. 0. 0. 0. 0.]\n" ] } ], "source": [ "obs, reward, done, info = env.step(donothing)\n", "print(obs.actual_dispatch)\n", "obs, reward, done, info = env.step(donothing)\n", "print(obs.actual_dispatch)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now everything is set up as it should be. The system is back to its original state. Let's now see what happens if we ask to increase again the value of this generator 0." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 4.69015403, -2.34502005, 0. , 0. , -2.34513398])" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "act = env.action_space({\"redispatch\": [(0,+5)]})\n", "# act = env.action_space({\"redispatch\": [(0,0)]})\n", "obs, reward, done, info = env.step(act)\n", "obs.actual_dispatch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This time we directly see the full (valid) redispatching action is not apply completely. This is due to the same phenomenon that previously. The environment increased the value of this generator, and at the same time, we also ask to increase it of its maximum value. So our action was \"capped\" and only 4.7MW (out of 5) were indeed produced by the generator. At the next time step, the action would be fully implemented.\n", "\n", "To conclude on redispathcin we saw that there is a difference between the value we ask, and the value implemented by the environment. This is mainly to:\n", "\n", "- the implemented vector should sum to 0.\n", "- if a redispatching is close to the maximum value it can take (due to ramping limitation or hard limitations) and at the same time the environment itself \"wanted\" to increase this value, then the physical limitations are respected.\n", "\n", "Redispatching action also last in time. One action must be explicitely canceled to be reset to 0. This cancelation, because of the limitations above-mentionned can take a few time step to be fully effective." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example of use: economic dispatch problem" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "a59f3dba7f01498eb85bdd8b41c83dd8", "version_major": 2, "version_minor": 0 }, "text/plain": [ "HBox(children=(FloatProgress(value=0.0, description='step', style=ProgressStyle(description_width='initial')),…" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "The cumulative reward with this agent is 121369\n" ] } ], "source": [ "agent = DoNothingAgent(env.action_space)\n", "done = False\n", "reward = env.reward_range[0]\n", "\n", "env.set_id(0) # make sure to evaluate the models on the same experiments\n", "obs = env.reset()\n", "cum_reward = 0\n", "nrow = env.chronics_handler.max_timestep() if max_iter <= 0 else max_iter\n", "gen_p = np.zeros((nrow, env.n_gen))\n", "gen_p_setpoint = np.zeros((nrow, env.n_gen))\n", "load_p = np.zeros((nrow, env.n_load))\n", "rho = np.zeros((nrow, env.n_line))\n", "i = 0\n", "with tqdm(total=max_iter, desc=\"step\") as pbar:\n", " while not done:\n", " act = agent.act(obs, reward, done)\n", " obs, reward, done, info = env.step(act)\n", " data_generator = env.chronics_handler.real_data.data\n", " gen_p_setpoint[i,:] = data_generator.prod_p[data_generator.current_index, :]\n", " gen_p[i,:] = obs.prod_p\n", " load_p[i,:] = obs.load_p\n", " rho[i,:] = obs.rho\n", " cum_reward += reward\n", " i += 1\n", " pbar.update(1)\n", " if i >= max_iter:\n", " break\n", "print(\"The cumulative reward with this agent is {:.0f}\".format(cum_reward))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "let's do the same, but forcing as much redispatching as possible on the cheapest generator." ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "aa4f6c06df5e4eeaaddbe9c265fc0a38", "version_major": 2, "version_minor": 0 }, "text/plain": [ "HBox(children=(FloatProgress(value=0.0, description='step', style=ProgressStyle(description_width='initial')),…" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "The cumulative reward with this agent is 97832\n" ] } ], "source": [ "class GreedyEconomic(Agent):\n", " def __init__(self, action_space):\n", " Agent.__init__(self, action_space)\n", " self.do_nothing = action_space()\n", " \n", " def act(self, obs, reward, done):\n", " act = self.do_nothing\n", " if obs.prod_p[0] < obs.gen_pmax[0] - 1 and \\\n", " obs.target_dispatch[0] < (obs.gen_pmax[0] - obs.gen_max_ramp_up[0]) - 1 and\\\n", " obs.prod_p[0] > 0.:\n", " # if the cheapest generator is significantly bellow its maximum cost\n", " if obs.target_dispatch[0] < obs.gen_pmax[0]:\n", " #in theory i can still ask for more\n", " act = env.action_space({\"redispatch\": [(0, obs.gen_max_ramp_up[0])]})\n", " return act\n", " \n", "agent = GreedyEconomic(env.action_space)\n", "done = False\n", "reward = env.reward_range[0]\n", "\n", "env.set_id(0) # reset the env to the same id\n", "obs = env.reset()\n", "cum_reward = 0\n", "nrow = env.chronics_handler.max_timestep() if max_iter <= 0 else max_iter\n", "gen_p = np.zeros((nrow, env.n_gen))\n", "gen_p_setpoint = np.zeros((nrow, env.n_gen))\n", "load_p = np.zeros((nrow, env.n_load))\n", "rho = np.zeros((nrow, env.n_line))\n", "i = 0\n", "with tqdm(total=max_iter, desc=\"step\") as pbar:\n", " while not done:\n", " act = agent.act(obs, reward, done)\n", " obs, reward, done, info = env.step(act)\n", "# print(\"act: {}\".format(act))\n", "# print(\"info: {}\".format(info['exception']))\n", "# if info['exception'] is not None:\n", " if np.abs(np.sum(obs.actual_dispatch)) > 1e-2:\n", " pdb.set_trace()\n", " data_generator = env.chronics_handler.real_data.data\n", " gen_p_setpoint[i,:] = data_generator.prod_p[data_generator.current_index, :]\n", " gen_p[i,:] = obs.prod_p\n", " load_p[i,:] = obs.load_p\n", " rho[i,:] = obs.rho\n", " cum_reward += reward\n", " i += 1\n", " pbar.update(1)\n", " if i >= max_iter:\n", " break\n", "print(\"The cumulative reward with this agent is {:.0f}\".format(cum_reward))" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.10" } }, "nbformat": 4, "nbformat_minor": 2 }