{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Training Agent, action converters and l2rpn_baselines\n", "Try me out interactively with: [![Binder](./img/badge_logo.svg)](https://mybinder.org/v2/gh/rte-france/Grid2Op/master)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is recommended to have a look at the [00_SmallExample](00_SmallExample.ipynb), [02_Observation](02_Observation.ipynb) and [03_Action](03_Action.ipynb) notebooks before getting into this one." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Objectives**\n", "\n", "In this notebook we will expose :\n", "* how to make grid2op compatible with *gym* RL framework (short introduction to *gym_compat* module)\n", "* how to transform grid2op actions / observations with gym \"spaces\" (https://gym.openai.com/docs/#spaces)\n", "* how to train a (naive) Agent using reinforcement learning.\n", "* how to inspect (rapidly) the actions taken by the Agent.\n", "\n", "**NB** In this tutorial, we will use the \n", "\n", "This notebook do not cover the use of existing RL frameworks. Please consult the [11_IntegrationWithExistingRLFrameworks](11_IntegrationWithExistingRLFrameworks.ipynb) for such information! \n", "\n", "\n", "**Don't hesitate to check the grid2op module grid2op.gym_compat for a closer integration between grid2op and openAI gym. This module is documented at https://grid2op.readthedocs.io/en/latest/gym.html** \n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "Execute the cell below by removing the # character if you use google colab !\n", "\n", "Cell will look like:\n", "```python\n", "!pip install grid2op[optional] # for use with google colab (grid2op is not installed by default)\n", "```\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# !pip install grid2op[optional] # for use with google colab (grid2Op is not installed by default)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import sys\n", "LEARNING_ITERATION = 50\n", "EVAL_EPISODE = 1\n", "MAX_EVAL_STEPS = 10" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "res = None\n", "try:\n", " from jyquickhelper import add_notebook_menu\n", " res = add_notebook_menu()\n", "except ModuleNotFoundError:\n", " print(\"Impossible to automatically add a menu / table of content to this notebook.\\nYou can download \\\"jyquickhelper\\\" package with: \\n\\\"pip install jyquickhelper\\\"\")\n", "res" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 0) Good practice\n", "\n", "### A. defining a training, validation and test sets\n", "\n", "As in other machine learning tasks, we highly recommend, before even trying to train an agent, to split the the \"episode data\" (*eg* what are the loads / generations for each load / generator) into 3 datasets:\n", "- \"train\" use to train the agent\n", "- \"val\" use to validate the hyper parameters\n", "- \"test\" at which you would look **only once** to report the agent performance in a scientific paper (for example)\n", "\n", "Grid2op lets you do that with relative ease:\n", "\n", "```python\n", "import grid2op\n", "env_name = \"l2rpn_case14_sandbox\" # or any other...\n", "env = grid2op.make(env_name)\n", "\n", "# extract 1% of the \"chronics\" to be used in the validation environment. 
"# extract 1% of the \"chronics\" to be used in the validation environment and 1% for the test\n", "# environment. The other 98% will be used for training\n", "nm_env_train, nm_env_val, nm_env_test = env.train_val_split_random(pct_val=1., pct_test=1.)\n", "\n", "# you can now check the names of the newly created environments:\n", "print(f\"The name of the training environment is \\\"{nm_env_train}\\\"\")\n", "print(f\"The name of the validation environment is \\\"{nm_env_val}\\\"\")\n", "print(f\"The name of the test environment is \\\"{nm_env_test}\\\"\")\n", "```" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "And now, you can use the training environment to train your agent:\n", "\n", "```python\n", "import grid2op\n", "env_name = \"l2rpn_case14_sandbox\"\n", "env = grid2op.make(env_name+\"_train\")\n", "```\n", "\n", "Be careful, on Windows you might run into issues. Don't hesitate to have a look at the documentation of this function if this is the case (see https://grid2op.readthedocs.io/en/latest/environment.html#grid2op.Environment.Environment.train_val_split and https://grid2op.readthedocs.io/en/latest/environment.html#grid2op.Environment.Environment.train_val_split_random)\n", "\n", "### B. Not spending all of your time loading data...\n", "\n", "In most grid2op environments, the \"data\" are loaded from the hard drive.\n", "\n", "From experience, what happens (especially at the beginning of training) is that your agent survives a few steps (which takes a few milliseconds) before a game over. At this stage you call `env.reset()`, which loads the data of the next scenario.\n", "\n", "This is the default behaviour and it is far from \"optimal\" (more time is spent loading data than performing actual useful computation). To that end, we encourage you:\n", "- to use a \"caching\" mechanism, for example with the `MultifolderWithCache` class\n", "- to read the data by small \"chunks\" (`env.chronics_handler.set_chunk_size(...)`)\n", "\n", "More information is provided in https://grid2op.readthedocs.io/en/latest/environment.html#optimize-the-data-pipeline and a short commented sketch is given right after this cell.\n", "\n", "### C. Use a fast simulator\n", "\n", "Grid2op uses a \"backend\" to compute the powerflows and return the next observation (after `env.step(...)`). Some backends are much faster than the default one: for example, we strongly encourage you to use the \"lightsim2grid\" backend.\n", "\n", "You can install it with `pip install lightsim2grid`\n", "\n", "And use it with:\n", "```python\n", "import grid2op\n", "from lightsim2grid import LightSimBackend\n", "env_name = \"l2rpn_case14_sandbox\"\n", "env = grid2op.make(env_name+\"_train\", backend=LightSimBackend(), ...)\n", "```" ] },
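{ "cell_type": "markdown", "metadata": {}, "source": [ "To make points B. and C. above a bit more concrete, the (entirely commented) cell below gives a minimal sketch of what the two data-pipeline options could look like on the training environment created in A. The chunk size of 100 is an arbitrary value chosen for the illustration; the exact calls are detailed in the documentation page linked above." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# minimal sketch only: it assumes the \"_train\" environment of section 0.A has been created,\n", "# which is why everything is commented out in this notebook\n", "import grid2op\n", "from grid2op.Chronics import MultifolderWithCache\n", "\n", "# option 1: cache all the chronics in memory (faster resets, but more RAM)\n", "# env = grid2op.make(env_name + '_train', chronics_class=MultifolderWithCache)\n", "# env.chronics_handler.reset()  # build the cache\n", "\n", "# option 2: read the time series by small chunks (less data loaded at each reset)\n", "# env = grid2op.make(env_name + '_train')\n", "# env.chronics_handler.set_chunk_size(100)\n", "\n", "# in both cases you can of course also pass backend=LightSimBackend() as in C. above" ] },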
{ "cell_type": "markdown", "metadata": {}, "source": [ "## I) Action representation" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### What's this ?\n", "\n", "The Grid2op package has been built with an \"object-oriented\" perspective: almost everything is encapsulated in a dedicated `class`. This allows for more customization of the platform.\n", "\n", "The downside of this approach is that machine learning methods, especially in deep learning, often prefer to deal with vectors rather than with \"complex\" objects. Indeed, as we saw in the previous tutorials, building our own actions can be tedious and can sometimes require substantial knowledge of the powergrid.\n", "\n", "On the contrary, in most standard Reinforcement Learning environments, actions have a higher-level representation. For example in pacman, there are 4 different types of actions: \"turn left\", \"turn right\", \"go up\" and \"go down\". This allows for easy sampling (if you need to achieve a uniform sampling, you simply need to randomly pick a number between 0 and 3 included) and an easy representation: each action can be represented as a different component of a vector of dimension 4 [because there are 4 actions].\n", "\n", "On the other hand, this representation is not \"human friendly\". It is quite convenient in the case of pacman because the action space is rather small, making it possible to remember which action corresponds to which component, but in the case of the grid2op package there are hundreds or even thousands of actions. We suppose that we do not really care about this here, as tutorials on Reinforcement Learning with a discrete action space often assume that actions are labeled with integers (such as in pacman for example).\n", "\n", "Converting grid2op actions into \"machine readable\" ones is the major difficulty, as there is no unique way to do so. In grid2op we offer some predefined \"converters\" to do it:\n", "\n", "- `BoxGymActSpace` will convert the action space into a gym \"Box\". It is rather straightforward, especially for **continuous** types of actions (such as *redispatching*, *curtailment* or actions on *storage units*). Representing the discrete actions (on powerlines and on substations) is not an easy task with it, so we would not recommend it if your focus is on topology. More information on https://grid2op.readthedocs.io/en/latest/gym.html#grid2op.gym_compat.BoxGymActSpace\n", "- `MultiDiscreteActSpace` is similar to `BoxGymActSpace` but mainly focused on the **discrete** actions (*lines status* and *substation reconfiguration*). Actions are represented with a gym \"MultiDiscrete\" space. It allows you to perform any number of actions at once (which might be illegal) and comes with few restrictions. It handles continuous actions through \"binning\" (which is not ideal but doable). We recommend using this transformation if the algorithm you want to use is able to deal with the \"MultiDiscrete\" gym action type. More information is given at https://grid2op.readthedocs.io/en/latest/gym.html#grid2op.gym_compat.MultiDiscreteActSpace\n", "- `DiscreteActSpace` is similar to `MultiDiscreteActSpace` in the sense that it focuses on **discrete** actions. It comes with a main restriction though: you can only do one action at a time. For example, you cannot \"modify a substation\" AND \"disconnect a powerline\" with the same action. More information is provided at https://grid2op.readthedocs.io/en/latest/gym.html#grid2op.gym_compat.DiscreteActSpace. We recommend using it if you want to focus on **discrete** actions and the algorithm you want to use is not able to deal with `MultiDiscreteActSpace`.\n", "- You can also fully customize the way you \"represent\" the action. More information is given in the notebook [11_IntegrationWithExistingRLFrameworks](11_IntegrationWithExistingRLFrameworks.ipynb)\n", "\n", "In the next section we will show an agent working with `DiscreteActSpace`. The code shown can easily be adapted to the other types of actions.\n", "\n", "### Create a gym compatible environment\n", "\n", "The first step is to \"convert\" your environment into a gym environment, for easier manipulation." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from grid2op.gym_compat import GymEnv\n", "import grid2op\n", "from gym import Env\n", "from gym.utils.env_checker import check_env\n", "try:\n", "    from lightsim2grid import LightSimBackend\n", "    bk_cls = LightSimBackend\n", "except ImportError as exc:\n", "    print(f\"Error: {exc} when importing faster LightSimBackend\")\n", "    from grid2op.Backend import PandaPowerBackend\n", "    bk_cls = PandaPowerBackend\n", "\n", "env_name = \"l2rpn_case14_sandbox\"\n", "training_env = grid2op.make(env_name, test=True, backend=bk_cls())  # we put \"test=True\" in this notebook because...\n", "# it's a notebook to explain things. Of course, do not put \"test=True\" if you really want\n", "# to train an agent...\n", "gym_env = GymEnv(training_env)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "isinstance(gym_env, Env)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "check_env(gym_env, warn=False)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, `gym_env` really is an environment from gym. It meets the gym requirements and passes the checks performed by gym.\n", "\n", "By default however, the action space and observation space are dictionaries (gym \"Dict\" spaces), which is not convenient for most machine learning algorithms (they need to be transformed into vectors somehow)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gym_env.action_space" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gym_env.observation_space" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "This is why it is often a good idea to customize the environment to have proper observations and actions.\n", "\n", "This is covered more in depth in the notebook [11_IntegrationWithExistingRLFrameworks](11_IntegrationWithExistingRLFrameworks.ipynb) (especially for the observation space).\n", "\n", "Here we use simple converters to focus on the training of the agent. You can set them with:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from grid2op.gym_compat import DiscreteActSpace\n", "gym_env.action_space = DiscreteActSpace(training_env.action_space,\n", "                                        attr_to_keep=[\"set_bus\", \"set_line_status_simple\"])\n", "gym_env.action_space" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from grid2op.gym_compat import BoxGymObsSpace\n", "gym_env.observation_space = BoxGymObsSpace(training_env.observation_space,\n", "                                           attr_to_keep=[\"rho\"])\n", "gym_env.observation_space" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "And now your agent will receive a \"box\" as observation:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "obs = gym_env.reset()\n", "obs" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "And it will be able to interact with the environment with standard \"int\" (integers are the normal way to represent a \"Discrete\" gym space):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "obs, reward, done, info = gym_env.step(0)  # perform the action labeled 0\n", "obs" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "obs, reward, done, info = gym_env.step(53)  # perform the action labeled 53\n", "obs" ] },
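{ "cell_type": "markdown", "metadata": {}, "source": [ "If you want to see which grid2op action \"hides\" behind a given integer, you can convert it back with the `from_gym` method of the converter (the same method is used later in this notebook, inside the agent class). The cell below is a small illustration with the two ids used above; it assumes that `from_gym` accepts a plain integer, and it also shows that random ids can be sampled as with any gym \"Discrete\" space." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# convert a few integer ids back to their grid2op representation\n", "for act_id in [0, 53]:\n", "    print(f'gym action id {act_id} corresponds to the grid2op action:')\n", "    print(gym_env.action_space.from_gym(act_id))\n", "    print('-' * 40)\n", "\n", "# and, as with any gym \"Discrete\" space, you can sample a random action id\n", "gym_env.action_space.sample()" ] },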
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Even better gym environment\n", "The environment shown here directly \"maps\" a grid2op environment to a gym environment: at each \"grid2op step\" the agent is asked to choose an action.\n", "\n", "\n", "After a few editions of the \"l2rpn\" competitions, it appears that the vast majority of top performers rely on \"heuristics\" to handle the problem exposed in grid2op.\n", "\n", "For example, most of the time the \"agent\" does not take any action when all the flows are below a certain threshold (say 90%) and only acts when at least one powerline sees a flow above this limit.\n", "\n", "In `l2rpn_baselines` we directly embedded the possibility to have a \"gym environment\" that implements some of these heuristics. For example, the \"gym environment\" keeps performing \"steps\" (with the do-nothing action) while all the flows are below a certain threshold; otherwise, it asks the agent for an action. In this setting, the agent is directly trained with the heuristics used at inference time.\n", "\n", "This is available in:\n", "\n", "```python\n", "from l2rpn_baselines.utils import GymEnvWithReco, GymEnvWithRecoWithDN\n", "```\n" ] },
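{ "cell_type": "markdown", "metadata": {}, "source": [ "To give an idea of what such a \"heuristic\" environment does, the cell below is a minimal sketch of the concept (it is **not** the `l2rpn_baselines` implementation): a standard gym wrapper that keeps \"doing nothing\" while the grid is safe and only hands control back to the agent otherwise. It assumes that the observation is the vector of flows \"rho\" (as configured above with `BoxGymObsSpace`) and that the integer `0` is the \"do nothing\" action of the `DiscreteActSpace` (something you can check with `gym_env.action_space.from_gym(0)`)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import gym\n", "import numpy as np\n", "\n", "\n", "class DoNothingWhileSafeWrapper(gym.Wrapper):\n", "    # minimal sketch: keep playing \"do nothing\" while all flows stay below `safe_max_rho`\n", "    # assumptions to check on your setup: the observation is the \"rho\" vector,\n", "    # and `do_nothing_id` really is the \"do nothing\" action of the action space\n", "    def __init__(self, env, safe_max_rho=0.9, do_nothing_id=0):\n", "        super().__init__(env)\n", "        self.safe_max_rho = safe_max_rho\n", "        self.do_nothing_id = do_nothing_id\n", "\n", "    def _skip_while_safe(self, obs, reward, done, info):\n", "        # keep \"doing nothing\" while the grid is safe (and the episode is not over)\n", "        while not done and float(np.max(obs)) < self.safe_max_rho:\n", "            obs, r, done, info = self.env.step(self.do_nothing_id)\n", "            reward += r  # keep the reward collected during the skipped steps\n", "        return obs, reward, done, info\n", "\n", "    def reset(self, **kwargs):\n", "        obs = self.env.reset(**kwargs)\n", "        # a full implementation would also handle the episode ending during these skipped steps\n", "        obs, _, _, _ = self._skip_while_safe(obs, 0., False, {})\n", "        return obs\n", "\n", "    def step(self, action):\n", "        return self._skip_while_safe(*self.env.step(action))\n", "\n", "\n", "# wrapped_env = DoNothingWhileSafeWrapper(gym_env)  # could then be used for training instead of gym_env" ] },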
{ "cell_type": "markdown", "metadata": {}, "source": [ "## II) Train an Agent\n", "\n", "In this tutorial we will train an agent that uses only discrete actions thanks to the \"stable baselines 3\" RL package.\n", "\n", "More precisely, we will use the \"PPO\" algorithm. We do not cover the details of this algorithm here.\n", "\n", "**NB** here we show a minimal complete code to get started. We recommend however to use the l2rpn_baselines `PPO_SB3` model (which integrates all this, with much better customization) for such a purpose." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from stable_baselines3 import PPO\n", "nn_model = PPO(env=gym_env,\n", "               learning_rate=1e-3,\n", "               policy=\"MlpPolicy\",\n", "               policy_kwargs={\"net_arch\": [100, 100, 100]},\n", "               n_steps=2,\n", "               batch_size=8,\n", "               verbose=True,\n", "               )" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nn_model.learn(total_timesteps=LEARNING_ITERATION)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Important notes\n", "\n", "A few notes here:\n", "- the hyperparameters (\"net_arch\", \"learning_rate\", \"n_steps\", \"gamma\", \"batch_size\", ...) are not fine tuned and are unlikely to lead to good results\n", "- the number of training steps is way too low (it should be much greater, probably >= 1M)\n", "- there are no \"heuristics\" embedded here. Agents perform best when they act only when the grid is \"in danger\" (otherwise it takes a while for an agent to learn to \"do nothing\")\n", "- we use a \"test\" environment; this is why we specified \"test=True\" at the environment creation\n", "- observations are not scaled\n", "- the reward is not tuned for a particular goal: the default reward is used and this might not be a good thing for the particular agent we are trying to build\n", "\n", "All of these probably lead to a quite terrible agent...\n", "\n", "### Other important notes\n", "\n", "We do not recommend to train \"real\" models with such a simple setup. In particular, you might need to save your agent (at different stages of training), log the progress using tensorboard (for example), etc.\n", "\n", "This can be done with \"callbacks\", which are not covered in this \"getting started\" (a short sketch is given right after this cell).\n", "\n", "\n", "### Yet another important note\n", "If you choose to use \"l2rpn_baselines\", it is just as easy to train an agent taking into account all of the above if you use:\n", "\n", "```python\n", "from l2rpn_baselines.utils import GymEnvWithRecoWithDN\n", "from l2rpn_baselines.PPO_SB3 import train\n", "```\n", "\n", "You can have a look at the \"examples\" section of the l2rpn_baselines repository (https://github.com/rte-france/l2rpn-baselines/tree/master/examples)\n", "\n", "\n", "\n" ] },
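{ "cell_type": "markdown", "metadata": {}, "source": [ "As an illustration of the note above, here is a minimal sketch of what saving and logging could look like with stable-baselines3. The paths, frequency and names below are arbitrary choices made for this example, and the training / saving calls are commented out so that they do not run in this notebook." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# minimal sketch: checkpoints, tensorboard logging and saving / reloading with stable-baselines3\n", "from stable_baselines3 import PPO\n", "from stable_baselines3.common.callbacks import CheckpointCallback\n", "\n", "# a checkpoint is written every `save_freq` environment steps (arbitrary values here)\n", "checkpoint_cb = CheckpointCallback(save_freq=1_000,\n", "                                   save_path='./ppo_checkpoints',\n", "                                   name_prefix='ppo_grid2op')\n", "\n", "# nn_model = PPO(env=gym_env, policy='MlpPolicy', verbose=1, tensorboard_log='./ppo_logs')\n", "# nn_model.learn(total_timesteps=LEARNING_ITERATION, callback=checkpoint_cb)\n", "# nn_model.save('./ppo_grid2op_final')  # save the trained weights...\n", "# nn_model = PPO.load('./ppo_grid2op_final', env=gym_env)  # ... and reload them later" ] },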
{ "cell_type": "markdown", "metadata": {}, "source": [ "## III) Evaluating the Agent" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "And now, it is time to test the agent that we trained, ideally on different scenarios and using another environment (to mimic a real process).\n", "\n", "To do that, we have two main choices:\n", "1) use the trained model, convert the other environment to a gym env, run the environment and ask the neural network what it should do\n", "2) embed the trained neural network in a \"grid2op agent\" class and use the built-in grid2op Runner transparently\n", "\n", "Solution 1) is to be preferred for \"quick and dirty\" tests. In all other cases we strongly recommend to use solution 2), as it spares you from re-coding the \"gym loop\" and lets you benefit from the saving functionality (so as to be able to inspect your agent in grid2viz, for example), etc.\n", "\n", "We will code a minimal agent to leverage solution 2). As always, this is much easier to do with the l2rpn_baselines package...\n", "\n", "### III.A) Create the \"grid2op agent\"" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from grid2op.Agent import BaseAgent\n", "\n", "class MyPPOAgent(BaseAgent):\n", "    def __init__(self,\n", "                 action_space,  # grid2op action space\n", "                 used_trained_gym_env,  # gym env used for training\n", "                 trained_ppo_nnet,  # neural net after training\n", "                 ):\n", "        super().__init__(action_space)\n", "        self.nn_model = trained_ppo_nnet\n", "        self.gym_env = used_trained_gym_env\n", "\n", "    def act(self, observation, reward, done=False):\n", "        # convert the grid2op observation to its gym representation\n", "        gym_obs = self.gym_env.observation_space.to_gym(observation)\n", "        # ask the trained neural network for the gym action (an integer here)\n", "        gym_act, _ = self.nn_model.predict(gym_obs, deterministic=True)\n", "        # and convert it back to a grid2op action\n", "        grid2op_act = self.gym_env.action_space.from_gym(gym_act)\n", "        return grid2op_act\n", "\n", "my_agent = MyPPOAgent(training_env.action_space, gym_env, nn_model)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**NB** Of course, if you use \"l2rpn_baselines\" you are not forced to recode it. Everything is properly set up for you to use the \"trained l2rpn_baselines agent\" directly in the grid2op runner!" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### III.B) Perform the evaluation\n", "\n", "Now we create the test environment (in this case it is the same as the training one, but we really do recommend to have different environments with different data...)\n", "\n", "For a real experiment this could look like:\n", "\n", "```python\n", "testing_env = grid2op.make(env_name+\"_test\", ...)\n", "```" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "testing_env = grid2op.make(env_name, test=True, backend=bk_cls())" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import shutil\n", "from tqdm.notebook import tqdm\n", "from grid2op.Runner import Runner\n", "from grid2op.Episode import EpisodeData\n", "\n", "save_path = \"saved_agent_PPO_{}\".format(LEARNING_ITERATION)\n", "path_save_results = \"{}_results\".format(save_path)\n", "shutil.rmtree(path_save_results, ignore_errors=True)\n", "\n", "test_runner = Runner(**testing_env.get_params_for_runner(),\n", "                     agentInstance=my_agent, agentClass=None)\n", "res = test_runner.run(nb_episode=EVAL_EPISODE,\n", "                      max_iter=MAX_EVAL_STEPS,\n", "                      pbar=tqdm,\n", "                      path_save=f\"./{path_save_results}\")" ] },
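{ "cell_type": "markdown", "metadata": {}, "source": [ "Before digging into the saved data, you can already print a small summary of what the runner returned (one entry per episode). The unpacking below assumes the usual content of these tuples; see the `grid2op.Runner` documentation for the exact details." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# print a small summary of the evaluation (one line per episode run by the runner)\n", "for _, chron_name, cum_reward, nb_time_step, max_ts in res:\n", "    print(f'chronics {chron_name}: survived {nb_time_step} / {max_ts} steps, total reward {cum_reward:.2f}')" ] },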
{ "cell_type": "markdown", "metadata": {}, "source": [ "### III.C) Inspect the Agent" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Please refer to the official documentation for more information about the contents of the directory where the data is saved. Note that saving the information is triggered by the \"path_save\" argument of the \"runner.run\" function.\n", "\n", "The information contained in this output is saved in a structured way and includes, for each episode:\n", "\n", " - \"episode_meta.json\": json file that represents some meta information about:\n", "\n", "   - \"backend_type\": the name of the `grid2op.Backend` class used\n", "   - \"chronics_max_timestep\": the **maximum** number of timesteps for the chronics used\n", "   - \"chronics_path\": the path where the temporal data (chronics) are located\n", "   - \"env_type\": the name of the `grid2op.Environment` class used\n", "   - \"grid_path\": the path where the powergrid has been loaded from\n", "\n", " - \"episode_times.json\": json file that gives some information about the total time spent in multiple parts of the runner, mainly the\n", "   `grid2op.Agent` (and especially its method `grid2op.Agent.act`) and the\n", "   `grid2op.Environment`\n", "\n", " - \"_parameters.json\": json representation of the `grid2op.Parameters` used for this episode\n", " - \"rewards.npy\": numpy 1d-array giving the rewards at each time step. We adopted the convention that the stored\n", "   reward at index `i` is the one observed by the agent at time `i` and **NOT** the reward sent by the\n", "   `grid2op.Environment` after the action has been taken.\n", " - \"exec_times.npy\": numpy 1d-array giving the execution time for each time step in the episode\n", " - \"actions.npy\": numpy 2d-array giving the actions that have been taken by the `grid2op.Agent`. Row `i` of \"actions.npy\" is a\n", "   vectorized representation of the action performed by the agent at timestep `i`, *ie.* **after** having observed\n", "   the observation present at row `i` of \"observations.npy\" and the reward shown in row `i` of \"rewards.npy\".\n", " - \"disc_lines.npy\": numpy 2d-array that tells which lines have been disconnected during the simulation of the cascading failure at each\n", "   time step. The same convention as for \"rewards.npy\" has been adopted. This means that the powerlines are\n", "   disconnected when the `grid2op.Agent` takes the `grid2op.Action` at time step `i`.\n", " - \"observations.npy\": numpy 2d-array representing the `grid2op.Observation` at the disposal of the\n", "   `grid2op.Agent` when it took its action." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We can first look at the directory where the data is stored:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "os.listdir(path_save_results)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, there is one folder per episode that was run (each corresponding to a chronic), as well as some additional json files.\n", "\n", "Now let's see what is inside one of these folders:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "EpisodeData.list_episode(path_save_results)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "For example, we can load the actions chosen by the Agent and have a look at them.\n", "\n", "To do that, we will load the action array and use the `action_space` function to convert it back to `Action` objects."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "all_episodes = EpisodeData.list_episode(path_save_results)\n", "this_episode = EpisodeData.from_disk(*all_episodes[0])\n", "li_actions = this_episode.actions" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "This allows us to have a deeper look at the actions and their effects." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now we will inspect the actions that have been taken by the agent:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "line_disc = 0\n", "line_reco = 0\n", "for act in li_actions:\n", "    dict_ = act.as_dict()\n", "    if \"set_line_status\" in dict_:\n", "        line_reco += dict_[\"set_line_status\"][\"nb_connected\"]\n", "        line_disc += dict_[\"set_line_status\"][\"nb_disconnected\"]\n", "print(f'Total reconnected lines: {line_reco}')\n", "print(f'Total disconnected lines: {line_disc}')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, during this episode our agent never tries to disconnect or reconnect a line." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We can also analyse the observations of the recorded episode:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "li_observations = this_episode.observations\n", "nb_real_disc = 0\n", "for obs_ in li_observations:\n", "    nb_real_disc += (obs_.line_status == False).sum()\n", "print(f'Total number of disconnected powerlines cumulated over all the timesteps: {nb_real_disc}')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We can also look at the kind of actions that the agent chose:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "actions_count = {}\n", "for act in li_actions:\n", "    act_as_vect = tuple(act.to_vect())\n", "    if act_as_vect not in actions_count:\n", "        actions_count[act_as_vect] = 0\n", "    actions_count[act_as_vect] += 1\n", "print(\"The agent did {} different valid actions:\\n\".format(len(actions_count)))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The actions chosen by the agent were:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for act in li_actions:\n", "    print(act)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## IV) Improve your Agent\n", "\n", "As we stated above, the goal of this notebook was not to show a \"state of the art\" agent but rather to explain, with a \"minimal example\", how you could get started with training an agent operating a powergrid.\n", "\n", "To improve your agent, we strongly recommend to use the l2rpn_baselines repository, to fine-tune the hyperparameters of your agent, to train it for longer on more diverse data, to use a different RL algorithm (maybe PPO is not the best one for this environment?), etc.\n", "\n", "Using some pre-training (for example, training the policy in a supervised fashion on heuristic or human demonstration data) might be a good solution.\n", "\n", "Another promising tool would be \"curriculum learning\" where, at the beginning, the agent interacts with a simplified version of the environment (easier rules, no disconnection due to overflow, no cooldown after an action has been made, etc.) and then, once the agent starts to perform well, the difficulty is gradually increased.\n", "\n", "Yet another possibility would be to use some kind of \"model-based\" reinforcement learning, such as Alpha-* models (*eg* AlphaGo)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 2 }