{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Training Agent, action converters and l2rpn_baselines\n", "Try me out interactively with: [![Binder](./img/badge_logo.svg)](https://mybinder.org/v2/gh/rte-france/Grid2Op/master)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is recommended to have a look at the [0_basic_functionalities](0_basic_functionalities.ipynb), [1_Observation_Agents](1_Observation_Agents.ipynb) and [2_Action_GridManipulation](2_Action_GridManipulation.ipynb) notebooks before getting into this one." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Objectives**\n", "\n", "In this notebook we will expose :\n", "* how to use the \"converters\": these allow to link several different representations of the actions (for example as `Action` objects or integers).\n", "* how to train a (naive) Agent using reinforcement learning.\n", "* how to inspect (rapidly) the action taken by the Agent.\n", "\n", "**NB** In this tutorial, we train an Agent inspired from this blog post: [deep-reinforcement-learning-tutorial-with-open-ai-gym](https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368). Many other different reinforcement learning tutorials exist. The code presented in this notebook only aims at demonstrating how to use the Grid2Op functionalities to train a Deep Reinforcement learning Agent and inspect its behaviour, but not at building a very smart agent. Nothing about the performance, training strategy, type of Agent, meta parameters, etc, should be retained as a common practice.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import sys\n", "import grid2op" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "res = None\n", "try:\n", " from jyquickhelper import add_notebook_menu\n", " res = add_notebook_menu()\n", "except ModuleNotFoundError:\n", " print(\"Impossible to automatically add a menu / table of content to this notebook.\\nYou can download \\\"jyquickhelper\\\" package with: \\n\\\"pip install jyquickhelper\\\"\")\n", "res" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## I) Manipulating action representation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Grid2op package has been built with an \"object-oriented\" perspective: almost everything is encapsulated in a dedicated `class`. This allows for more customization of the plateform.\n", "\n", "The downside of this approach is that machine learning methods, especially in deep learning, often prefer to deal with vectors rather than with \"complex\" objects. Indeed, as we covered in the previous tutorials on the platform, we saw that building our own actions can be tedious and can sometime require important knowledge of the powergrid.\n", "\n", "On the contrary, in most of the standard Reinforcement Learning environments, actions have a higher representation. For example in pacman, there are 4 different types of actions: turn left, turn right, go up and do down. This allows for easy sampling (if you need to achieve an uniform sampling, you simply need to randomly pick a number between 0 and 3 included) and an easy representation: each action can be represented as a different component of a vector of dimension 4 [because there are 4 actions]. \n", "\n", "On the other hand, this representation is not \"human friendly\". 
It is quite convenient in the case of pacman because the action space is rather small, making it possible to remember which action corresponds to which component, but in the case of the grid2op package there are hundreds or even thousands of actions, making it impossible to remember which component corresponds to which action. We suppose that we do not really care about this here, as tutorials on Reinforcement Learning with discrete action spaces often assume that actions are labeled with integers (as in pacman for example).\n", "\n", "However, to make it easier to train RL agents, grid2op provides \"[Converters](https://grid2op.readthedocs.io/en/latest/converters.html)\", whose role is to let an agent deal with a custom representation of the action space. The class [AgentWithConverter](https://grid2op.readthedocs.io/en/latest/agent.html#grid2op.Agent.AgentWithConverter) is perfect for such usage." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# import the useful classes\n", "import numpy as np\n", "\n", "from grid2op import make\n", "from grid2op.Agent import RandomAgent\n", "max_iter = 100 # to make the computation much faster we will only consider 100 time steps instead of 287\n", "train_iter = 1000\n", "env_name = \"rte_case14_redisp\"\n", "env = make(env_name, test=True)\n", "env.seed(0) # this is to ensure all sources of randomness in the environment are reproducible\n", "my_agent = RandomAgent(env.action_space)\n", "my_agent.seed(0) # this is to ensure that all actions made by this random agent will be the same" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And that's it. This agent will be able to perform any action, but instead of going through the description of the actions from a powersystem point of view (i.e. setting what is connected to what, what is disconnected, etc.) it will simply choose an integer with the method `my_act`. This integer will then be converted back to a proper action.\n", "\n", "Here is an example of the action representation as seen by the Agent (here, integers):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for el in range(3):\n", "    print(my_agent.my_act(None, None))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below you can see that the `act` function behaves as expected, handling proper `Action` objects:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for el in range(3):\n", "    print(my_agent.act(None, None))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**NB** many of these actions are equivalent to the \"do nothing\" action in some situations. For example, trying to reconnect a powerline that is already connected will not do anything. The same goes for topology: if everything is already connected to bus 1, then an action that connects things to bus 1 on the same substation will not affect the powergrid." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## II) Training an Agent" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this tutorial, we will show how to build a Q-learning Agent. Most of the code originated from this blog post (which has since been deleted) [https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368](https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368). 
\n", "\n", "The goal of this notebook is to demonstrate how to train an agent using grid2op framework. The key message is: since grid2op fully implements the gym API, it is rather easy to do. We will use the [l2rpn baselines](https://github.com/rte-france/l2rpn-baselines) repository and implement a Double Dueling Deep Q learning Algorithm. For more information, you can look for the code in the dedicated repository [here](https://github.com/rte-france/l2rpn-baselines/tree/master/l2rpn_baselines/DoubleDuelingDQN).\n", "\n", "**Requirements** This notebook requires having `keras` installed on your machine as well as the `l2rpn_baselines` repository.\n", "\n", "As always in these notebooks, we will use the `rte_case14_realistic` test Environment. More data is available if you don't pass the `test=True` parameters." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### II.A) Defining some \"helpers\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The type of Agent we are using requires a bit of setup, independantly of Grid2Op. We will reuse the code shown in \n", "[https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368](https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368) and in [Reinforcement-Learning-Tutorial](https://github.com/abhinavsagar/Reinforcement-Learning-Tutorial) from Abhinav Sagar. The code is registered under the *MIT license* found here: [MIT License](https://github.com/abhinavsagar/Reinforcement-Learning-Tutorial/blob/master/LICENSE).\n", "\n", "This first section aims at defining these classes." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You will need to install the l2rpn_baselines library. Since this library is uploaded on Pypi this can be done easily." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"To install l2rpn_baselines, either uncomment the cell below, or type, in a command prompt:\\n{}\".format(\n", " (\"\\t{} -m pip install l2rpn_baselines\".format(sys.executable))))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# !$sys.executable -m pip install l2rpn_baselines" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But first let's import the necessary dependencies :" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#tf2.0 friendly\n", "import numpy as np\n", "import random\n", "import warnings\n", "import l2rpn_baselines" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### II.B) Adaptation of the inputs\n", "\n", "For many of the Deep Reinforcement Learning problems (for example for model used to play Atari games), the inputs are images and the outputs are integers that encode the different possible actions (typically \"move up\" or \"move down\" in Atari). In our system (the powergrid) it is rather different. We did our best to make the convertion between complex and simple structures easy. 
Indeed, the use of converters such as [IdToAct](https://grid2op.readthedocs.io/en/latest/converter.html#grid2op.Converter.IdToAct) makes it easy to:\n", "- convert the class \"Observation\" into vectors automatically\n", "- map the actions from integers to complete `Action` objects defined in the previous notebooks\n", "\n", "In essence, a converter lets you manipulate the \"action space\" of the Agent and is such that:\n", "- the Agent manipulates a simple, custom structure of actions\n", "- the Converter takes care of mapping from this simple structure to complex grid2op Action / Observation objects\n", "- as a whole, the Agent will actually return full Action / Observation objects to the environment while only working with simpler objects\n", "\n", "\n", "#### A note on the converter\n", "To use this converter, the Agent must inherit from the class [`grid2op.Agent.AgentWithConverter`](https://grid2op.readthedocs.io/en/latest/agent.html#grid2op.Agent.AgentWithConverter) and implement the following interface (shown here as an example):\n", "\n", "\n", "```python\n", "from grid2op.Agent import AgentWithConverter\n", "class MyAgent(AgentWithConverter):\n", "    def __init__(self, action_space, action_space_converter=None):\n", "        super(MyAgent, self).__init__(action_space=action_space, action_space_converter=action_space_converter)\n", "        # for example you can define here all the actions you will consider\n", "        self.my_actions = [action_space(),\n", "                           action_space({\"redispatching\": [0,+1]}),\n", "                           action_space({\"set_line_status\": [(0,-1)]}),\n", "                           action_space({\"change_bus\": {\"lines_or_id\": [12]}}),\n", "                           ...\n", "                           ]\n", "        # or load them from a file, for example...\n", "        # self.my_actions = np.load(\"my_action_pre_selected.npy\")\n", "\n", "        # you can also load a neural network in this agent...\n", "        self.my_nn_model = model.load(\"my_saved_neural_network_weights.h5\")\n", "\n", "    def convert_obs(self, observation):\n", "        \"\"\"\n", "        This method is used to convert the observation (given as an Observation object)\n", "        into a \"transformed_observation\" that will be manipulated by the agent.\n", "        The example here transforms the observation into a numpy array.\n", "\n", "        It is recommended to modify it to suit your needs.\n", "        \"\"\"\n", "        return observation.to_vect()\n", "\n", "    def convert_act(self, encoded_act):\n", "        \"\"\"\n", "        This method converts an \"encoded_act\" (for example an integer) into a valid grid2op action.\n", "        \"\"\"\n", "        if encoded_act < 0 or encoded_act >= len(self.my_actions):\n", "            raise RuntimeError(\"Invalid action with id {}\".format(encoded_act))\n", "        return self.my_actions[encoded_act]\n", "\n", "    def my_act(self, transformed_observation, reward, done=False):\n", "        \"\"\"\n", "        This is the main function where you take your decision.\n", "\n", "        Instead of implementing \"act(observation, reward, done)\", you implement\n", "        \"my_act(transformed_observation, reward, done)\":\n", "        - it only manipulates the \"transformed_observation\", which is fully flexible since you defined \"convert_obs\"\n", "        - it returns an \"encoded_act\" that is then automatically digested by\n", "          \"convert_act(encoded_act)\" to produce a valid action.\n", "\n", "        Here we suppose, as many DQN agents do, that `my_nn_model` returns a vector of size\n", "        nb_actions filled with numbers between 0 and 1, and we take the action with the highest score.\n", "        \"\"\"\n", "        pred_score = self.my_nn_model.predict(transformed_observation, reward, done)\n", "        res = 
np.argmax(pred_score)\n", "        return res\n", "```\n", "And that's it. There is nothing else to do, your agent is ready to learn how to control a powergrid using only these 3 functions.\n", "\n", "\n", "**NB** A few things are worth noting:\n", "- if you use an agent with a converter, do not modify the method **act** but rather the method **my_act**; this is really important!\n", "- the method **my_act**, which you will have to implement, takes a transformed observation as an argument and must return a transformed action. This transformed action will then be converted back into an actual `Action` object of the original action space in the **act** method, which you must leave unchanged.\n", "- some automatic functions can compute the set of all possible actions, so there is no need to write \"self.my_actions = ...\". This was done as an example only.\n", "- if the converter is properly set up, you don't even need to modify \"convert_obs(self, observation)\" and \"convert_act(self, encoded_act)\" as default conversions are already provided in the implementation. However, if you want to work with a custom observation space and action space, you can modify these two methods in order to have the conversions that you need." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we consider the observation as a whole and do not attempt any modification of the features. This means that the vector (the observation) that the agent will receive is going to be really big, not scaled, and filled with a lot of information that may not be really useful. One could instead select only a subset of the available features and apply a pre-processing function to them." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### II.C) Writing the code of the Agent and training it" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### a) Code of the agent\n", "\n", "Here we show the most interesting part (for this tutorial) of the code that is implemented as a baseline. 
For a full description of the code, you can go take a look [here](https://github.com/rte-france/l2rpn-baselines/tree/master/l2rpn_baselines/DuelQSimple).\n", "\n", "This is the `DuelQ_NN.py` file (we copy pasted the function defined in its base class `l2rpn_baselines.utils.BaseDeepQ.py` for clarity):\n", "\n", "```python\n", "import tensorflow.keras as tfk\n", "class DuelQ_NN(BaseDeepQ):\n", " \"\"\"Constructs the desired deep q learning network\"\"\"\n", " def __init__(self,\n", " nn_params, # neural network meta parameters, defining its architecture\n", " training_param=None # training scheme (learning rate, learning rate decay, etc.)\n", " ):\n", " self.action_size = action_size\n", " self.observation_size = observation_size\n", " HIDDEN_FOR_SIMPLICITY\n", "\n", " def construct_q_network(self):\n", " \"\"\"\n", " It uses the architecture defined in the `nn_archi` attributes.\n", "\n", " \"\"\"\n", " self.model = Sequential()\n", " input_layer = Input(shape=(self.nn_archi.observation_size,),\n", " name=\"observation\")\n", "\n", " lay = input_layer\n", " for lay_num, (size, act) in enumerate(zip(self.nn_archi.sizes, self.nn_archi.activs)):\n", " lay = Dense(size, name=\"layer_{}\".format(lay_num))(lay) # put at self.action_size\n", " lay = Activation(act)(lay)\n", "\n", " fc1 = Dense(self.action_size)(lay)\n", " advantage = Dense(self.action_size, name=\"advantage\")(fc1)\n", "\n", " fc2 = Dense(self.action_size)(lay)\n", " value = Dense(1, name=\"value\")(fc2)\n", "\n", " meaner = Lambda(lambda x: K.mean(x, axis=1))\n", " mn_ = meaner(advantage)\n", " tmp = subtract([advantage, mn_])\n", " policy = add([tmp, value], name=\"policy\")\n", "\n", " self.model = Model(inputs=[input_layer], outputs=[policy])\n", " self.schedule_model, self.optimizer_model = self.make_optimiser()\n", " self.model.compile(loss='mse', optimizer=self.optimizer_model)\n", "\n", " self.target_model = Model(inputs=[input_layer], outputs=[policy])\n", " \n", " def predict_movement(self, data, epsilon, batch_size=None):\n", " \"\"\"\n", " Predict movement of game controler where is epsilon\n", " probability randomly move.\n", " \n", " This was part of the BaseDeepQ and was copy pasted without any modifications\n", " \"\"\"\n", " if batch_size is None:\n", " batch_size = data.shape[0]\n", "\n", " rand_val = np.random.random(batch_size)\n", " q_actions = self.model.predict(data, batch_size=batch_size)\n", "\n", " opt_policy = np.argmax(np.abs(q_actions), axis=-1)\n", " opt_policy[rand_val < epsilon] = np.random.randint(0, self.action_size, size=(np.sum(rand_val < epsilon)))\n", " return opt_policy, q_actions[0, opt_policy]\n", "```\n", "\n", "This is the `DoubleDuelingDQN.py` file:\n", "```python\n", "from grid2op.Agent import AgentWithConverter # all converter agent should inherit this\n", "from grid2op.Converter import IdToAct # this is the automatic converter to convert action given as ID (integer)\n", "# to valid grid2op action (in particular it is able to compute all actions).\n", "\n", "from l2rpn_baselines.DoubleDuelingDQN.DoubleDuelingDQN_NN import DoubleDuelingDQN_NN\n", "class DuelQSimple(DeepQAgent):\n", " def __init__(self,\n", " action_space,\n", " nn_archi,\n", " HIDDEN_FOR_SIMPLICITY\n", " ):\n", " ...\n", " HIDDEN_FOR_SIMPLICITY\n", " ...\n", " # Load network graph\n", " self.deep_q = None ## is loaded when first called or when explicitly loaded\n", " \n", " ## Agent Interface\n", " def convert_obs(self, observation):\n", " \"\"\"\n", " Generic way to convert an observation. 
This transform it to a vector and the select the attributes\n", " that were selected in :attr:`l2rpn_baselines.utils.NNParams.list_attr_obs` (that have been\n", " extracted once and for all in the :attr:`DeepQAgent._indx_obs` vector).\n", " \"\"\"\n", " obs_as_vect = observation.to_vect()\n", " self._tmp_obs[:] = obs_as_vect[self._indx_obs]\n", " return self._tmp_obs\n", "\n", " def convert_act(self, action):\n", " \"\"\"\n", " calling the convert_act method of the base class.\n", " This is not mandatory as this is the standard behaviour in OOP (object oriented programming)\n", " We only show it here as illustration.\n", " \"\"\"\n", " return super().convert_act(action)\n", " \n", " def my_act(self, transformed_observation, reward, done=False):\n", " \"\"\"\n", " This function will return the action (its id) selected by the underlying \n", " :attr:`DeepQAgent.deep_q` network.\n", " \n", " \n", " Before being used, this method require that the :attr:`DeepQAgent.deep_q` is created. \n", " To that end a call to :func:`DeepQAgent.init_deep_q` needs to have been performed \n", " (this is automatically done if you use baseline we provide and their `evaluate` \n", " and `train` scripts).\n", " \"\"\"\n", " predict_movement_int, *_ = self.deep_q.predict_movement(transformed_observation, epsilon=0.0)\n", " res = int(predict_movement_int)\n", " self._store_action_played(res)\n", " return res\n", "```\n", "\n", "#### b) Training the model\n", "\n", "Now we can define the agent and train it.\n", "\n", "To that extent, we will use the \"train\" method provided in the `l2rpn_baselines` repository.\n", "\n", "**NB** The code below can take a few minutes to run. It's training a Deep Reinforcement Learning Agent after all. It this takes too long on your machine, you can always decrease the \"nb_frame\", and set it to 1000 for example. In this case, the Agent will probably not be really good.\n", "\n", "**NB** It would take much longer to train a good agent." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# create an environment\n", "env = make(env_name, test=True) \n", "# don't forget to set \"test=False\" (or remove it, as False is the default value) for \"real\" training\n", "\n", "# import the train function and train your agent\n", "from l2rpn_baselines.DuelQSimple import train\n", "from l2rpn_baselines.utils import NNParam, TrainingParam\n", "agent_name = \"test_agent\"\n", "save_path = \"saved_agent_DDDQN_{}\".format(train_iter)\n", "logs_dir=\"tf_logs_DDDQN\"\n", "\n", "\n", "# we then define the neural network we want to make (you may change this at will)\n", "## 1. first we choose what \"part\" of the observation we want as input, \n", "## here for example only the generator and load information\n", "## see https://grid2op.readthedocs.io/en/latest/observation.html#main-observation-attributes\n", "## for the detailed about all the observation attributes you want to have\n", "li_attr_obs_X = [\"prod_p\", \"prod_v\", \"load_p\", \"load_q\"]\n", "# this automatically computes the size of the resulting vector\n", "observation_size = NNParam.get_obs_size(env, li_attr_obs_X) \n", "\n", "## 2. then we define its architecture\n", "sizes = [300, 300, 300] # 3 hidden layers, of 300 units each, why not...\n", "activs = [\"relu\" for _ in sizes] # all followed by relu activation, because... why not\n", "## 4. 
you put it all in a dictionary like this (specific to this baseline)\n", "kwargs_archi = {'observation_size': observation_size,\n", "                'sizes': sizes,\n", "                'activs': activs,\n", "                \"list_attr_obs\": li_attr_obs_X}\n", "\n", "# you can also change the training parameters you are using\n", "# more information at https://l2rpn-baselines.readthedocs.io/en/latest/utils.html#l2rpn_baselines.utils.TrainingParam\n", "tp = TrainingParam()\n", "tp.batch_size = 32 # for example...\n", "tp.update_tensorboard_freq = int(train_iter / 10)\n", "tp.save_model_each = int(train_iter / 3)\n", "tp.min_observation = int(train_iter / 5)\n", "train(env,\n", "      name=agent_name,\n", "      iterations=train_iter,\n", "      save_path=save_path,\n", "      load_path=None, # put something else if you want to reload an agent instead of creating a new one\n", "      logs_dir=logs_dir,\n", "      kwargs_archi=kwargs_archi,\n", "      training_param=tp)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Logs are saved in the \"tf_logs_DDDQN\" directory. To watch what happens during training, you can type the following command (from a bash command line for example):\n", "```\n", "tensorboard --logdir='tf_logs_DDDQN'\n", "```\n", "You can even do it while the agent is training. Tensorboard allows you to monitor different quantities during training, for example the loss of your neural network or the number of steps the agent performed before getting a game over, etc.\n", "\n", "At first glance, here is what it could look like:\n", "\n", "\n", "Monitoring of the training | Representation as a graph of the neural network\n", ":-------------------------:|:-------------------------:\n", "![](./img/tensorboard_example.png) | ![](./img/tensorboard_graph.png)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## III) Evaluating the Agent" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now, it is time to test the agent that we trained.\n", "\n", "To do that, we have multiple choices.\n", "\n", "We can either re-code the \"DeepQAgent\" class to load the stored weights (that have been saved during training) when it is initialized (this is not covered in this notebook), or we can directly provide the instance of the Agent to use to the Grid2Op Runner.\n", "\n", "Doing this is fairly simple. First, you need to specify that you won't use the \"*agentClass*\" argument by setting it to ``None``, and secondly you simply have to provide the agent instance to be used as the *agentInstance* argument.\n", "\n", "**NB** If you don't do that, the Runner will not be created (its constructor will raise an exception). If you choose to feed the \"*agentClass*\" argument with a class, your agent will be re-instantiated from scratch with this class. If you do re-instantiate your agent and the class does not load pre-trained weights, then **the agent will not be pre-trained** and will be unlikely to perform well on the task." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### III.A) Evaluate the Agent" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have \"successfully\" trained our Agent, we will evaluate it. 
The evaluation can be done classically using a standard Runner with the following code:\n", "\n", "```python\n", "from grid2op.Runner import Runner\n", "from tqdm.notebook import tqdm\n", "# chose a scoring function (might be different from the reward you use to train your agent)\n", "from grid2op.Reward import L2RPNReward\n", "scoring_function = L2RPNReward \n", "path_save_results = \"{}_results\".format(save_path)\n", "\n", "# load your agent (a bit technical to know exactly what to import and how to use it)\n", "# this is why we made the \"evaluate\" function that simplifies greatly the process.\n", "from l2rpn_baselines.DuelQSimple import DuelQSimple\n", "from l2rpn_baselines.DuelQSimple import DuelQ_NNParam\n", "path_model, path_target_model = DuelQ_NNParam.get_path_model(save_path, agent_name)\n", "nn_archi = DuelQ_NNParam.from_json(os.path.join(path_model, \"nn_architecture.json\"))\n", "\n", "\n", "my_agent = DuelQSimple(env.action_space, nn_archi, name=agent_name)\n", "my_agent.load(save_path)\n", "my_agent.init_obs_extraction(env)\n", "\n", "# here we do that to limit the time take, and will only assess the performance on \"max_iter\" iteration\n", "dict_params = env.get_params_for_runner()\n", "dict_params[\"gridStateclass_kwargs\"][\"max_iter\"] = max_iter\n", "# make a runner from an intialized environment\n", "runner = Runner(**dict_params, agentClass=None, agentInstance=my_agent)\n", "\n", "# run the episode\n", "res = runner.run(nb_episode=2, path_save=path_save_results, pbar=tqdm)\n", "print(\"The results for the trained agent are:\")\n", "for _, chron_name, cum_reward, nb_time_step, max_ts in res:\n", " msg_tmp = \"\\tFor chronics located at {}\\n\".format(chron_name)\n", " msg_tmp += \"\\t\\t - total score: {:.6f}\\n\".format(cum_reward)\n", " msg_tmp += \"\\t\\t - number of time steps completed: {:.0f} / {:.0f}\".format(nb_time_step, max_ts)\n", " print(msg_tmp)\n", "```\n", "*NB* In this case, the Runner will use a \"scoring function\" that might be different from the \"reward function\" used during training. In our case, We use the `L2RPNReward` function for both training and evaluating. This is not mandatory.\n", "\n", "But there is, and this is a requirement for all \"baselines\" an \"evaluate\" function that does precisely that. We will use that in this notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from l2rpn_baselines.DuelQSimple import evaluate\n", "path_save_results = \"{}_results\".format(save_path)\n", "\n", "evaluated_agent, res_runner = evaluate(env,\n", " name=agent_name,\n", " load_path=save_path,\n", " logs_path=path_save_results,\n", " nb_episode=2,\n", " nb_process=1,\n", " max_steps=100,\n", " verbose=True,\n", " save_gif=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### III.B) Inspect the Agent " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Please refer to the official documentation for more information about the contents of the directory where the data is saved. 
Note that saving the information is triggered by the \"path_save\" argument of the \"runner.run\" function.\n", "\n", "The information contained in this output will be saved in a structured way and includes :\n", "For each episode :\n", " - \"episode_meta.json\": json file that represents some meta information about:\n", "\n", " - \"backend_type\": the name of the `grid2op.Backend` class used\n", " - \"chronics_max_timestep\": the **maximum** number of timesteps for the chronics used\n", " - \"chronics_path\": the path where the temporal data (chronics) are located\n", " - \"env_type\": the name of the `grid2op.Environment` class used.\n", " - \"grid_path\": the path where the powergrid has been loaded from\n", "\n", " - \"episode_times.json\": json file that gives some information about the total time spent in multiple parts of the runner, mainly the\n", " `grid2op.Agent` (and especially its method `grid2op.Agent.act`) and the\n", " `grid2op.Environment`\n", "\n", " - \"_parameters.json\": json representation of the `grid2op.Parameters` used for this episode\n", " - \"rewards.npy\": numpy 1d-array giving the rewards at each time step. We adopted the convention that the stored\n", " reward at index `i` is the one observed by the agent at time `i` and **NOT** the reward sent by the\n", " `grid2op.Environment` after the action has been taken.\n", " - \"exec_times.npy\": numpy 1d-array giving the execution time for each time step in the episode\n", " - \"actions.npy\": numpy 2d-array giving the actions that have been taken by the `grid2op.Agent`. At row `i` of \"actions.npy\" is a\n", " vectorized representation of the action performed by the agent at timestep `i` *ie.* **after** having observed\n", " the observation present at row `i` of \"observation.npy\" and the reward showed in row `i` of \"rewards.npy\".\n", " - \"disc_lines.npy\": numpy 2d-array that tells which lines have been disconnected during the simulation of the cascading failure at each\n", " time step. The same convention has been adopted for \"rewards.npy\". This means that the powerlines are\n", " disconnected when the `grid2op.Agent` takes the `grid2op.Action` at time step `i`.\n", " - \"observations.npy\": numpy 2d-array representing the `grid2op.Observation` at the disposal of the\n", " `grid2op.Agent` when he took his action." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can first look at the repository were the data is stored:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "os.listdir(path_save_results)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see there are 2 folders, each corresponding to a chronics.\n", "There are also additional json files.\n", "\n", "Now let's see what is inside one of these folders:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "os.listdir(os.path.join(path_save_results, \"0\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For example, we can load the actions chosen by the Agent, and have a look at them.\n", "\n", "To do that, we will load the action array and use the `action_space` function to convert it back to `Action` objects." 
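, "\n", "If you just want a quick peek at the raw files, the arrays described above can also be read directly with numpy. This is a minimal sketch (assuming the default file layout produced by the runner, with one folder per episode such as \"0\"); the `EpisodeData` helper used in the next cell remains the more convenient way:\n", "\n", "```python\n", "import os\n", "import numpy as np\n", "\n", "# read the raw action array saved for the first episode (folder \"0\")\n", "raw_actions = np.load(os.path.join(path_save_results, \"0\", \"actions.npy\"))\n", "print(raw_actions.shape)  # one row per time step, each row is a vectorized action\n", "```"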
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from grid2op.Episode import EpisodeData\n", "this_episode = EpisodeData.from_disk(path_save_results, name=\"0\")\n", "all_actions = this_episode.get_actions()\n", "li_actions = []\n", "for i in range(all_actions.shape[0]):\n", " try:\n", " tmp = runner.env.action_space.from_vect(all_actions[i,:])\n", " li_actions.append(tmp)\n", " except:\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This allows us to have a deeper look at the actions, and their effects." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will inspect the actions that has been taken by the agent :" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "line_disc = 0\n", "line_reco = 0\n", "for act in li_actions:\n", " dict_ = act.as_dict()\n", " if \"set_line_status\" in dict_:\n", " line_reco += dict_[\"set_line_status\"][\"nb_connected\"]\n", " line_disc += dict_[\"set_line_status\"][\"nb_disconnected\"]\n", "print(f'Total reconnected lines : {line_reco}')\n", "print(f'Total disconnected lines : {line_disc}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, during this episode, our agent never tries to disconnect or reconnect a line." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also analyse the observations of the recorded episode :" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "all_observations = this_episode.get_observations()\n", "li_observations = []\n", "nb_real_disc = 0\n", "for i in range(all_observations.shape[0]):\n", " try:\n", " tmp = runner.env.observation_space.from_vect(all_observations[i,:])\n", " li_observations.append(tmp)\n", " nb_real_disc += (np.sum(tmp.line_status == False))\n", " except:\n", " break\n", "print(f'Total number of disconnected powerlines cumulated over all the timesteps : {nb_real_disc}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also look at the kind of actions that the agent chose:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "actions_count = {}\n", "for act in li_actions:\n", " act_as_vect = tuple(act.to_vect())\n", " if not act_as_vect in actions_count:\n", " actions_count[act_as_vect] = 0\n", " actions_count[act_as_vect] += 1\n", "print(\"The agent did {} different valid actions:\\n\".format(len(actions_count)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The actions chosen by the agent were :" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "all_act = np.array(list(actions_count.keys()))\n", "for act in all_act:\n", " print(runner.env.action_space.from_vect(act))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## IV) Improve your Agent " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we saw, the agent we developped was not really interesting. To improve it, we could think about:\n", "\n", "- a better encoding of the observation. For now everything, the whole observation is fed to the neural network, and we do not try to modify it at all. 
This is a real problem for learning algorithms.\n", "- a better neural network architecture (as said, we didn't pay any attention to it in our model)\n", "- training it for a longer time\n", "- adapting the learning rate and all the hyper-parameters of the learning algorithm\n", "- etc.\n", "\n", "In this notebook, we will focus on changing the observation representation, by feeding the agent only a part of the available information.\n", "\n", "To do so, the only thing that we need to do is to modify the way the observation is converted in the `convert_obs` method, and that is it. Nothing else needs to be changed. Here for example, we could think of only using the flow ratio (i.e., the current flow divided by the thermal limit, named rho) instead of feeding the whole observation to the agent." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we can reuse the generic method provided by l2rpn_baselines to train it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_iter2 = int(1.5*train_iter) # train longer\n", "\n", "agent_name2 = \"{}_2\".format(agent_name)\n", "save_path2 = \"saved_agent2_DDDQN_{}\".format(train_iter2)\n", "logs_dir2 = \"tf_logs_DDDQN\"\n", "\n", "# we then define the neural network we want to make (you may change this at will)\n", "## 1. first we choose what \"part\" of the observation we want as input,\n", "## here the generator and load information, plus the line loadings (rho) and the topology vector\n", "## see https://grid2op.readthedocs.io/en/latest/observation.html#main-observation-attributes\n", "## for details about all the observation attributes you can use\n", "li_attr_obs_X2 = [\"prod_p\", \"prod_v\", \"load_p\", \"load_q\", \"rho\", \"topo_vect\"] # add some more information\n", "# this automatically computes the size of the resulting vector\n", "observation_size2 = NNParam.get_obs_size(env, li_attr_obs_X2)\n", "\n", "## 2. then we define its architecture\n", "sizes2 = [500, 500, 500, 500] # use a bigger network (both deeper and wider): 4 hidden layers of 500 units each\n", "activs2 = [\"relu\" for _ in sizes2] # all followed by relu activation, because... why not\n", "## 4. you put it all in a dictionary like this (specific to this baseline)\n", "kwargs_archi2 = {'observation_size': observation_size2,\n", "                 'sizes': sizes2,\n", "                 'activs': activs2,\n", "                 \"list_attr_obs\": li_attr_obs_X2}\n", "\n", "# you can also change the training parameters you are using\n", "# more information at https://l2rpn-baselines.readthedocs.io/en/latest/utils.html#l2rpn_baselines.utils.TrainingParam\n", "tp2 = TrainingParam()\n", "tp2.batch_size = 32 # for example...\n", "tp2.update_tensorboard_freq = int(train_iter2 / 10)\n", "tp2.save_model_each = int(train_iter2 / 3)\n", "tp2.min_observation = int(train_iter2 / 5)\n", "train(env,\n", "      name=agent_name2,\n", "      iterations=train_iter2,\n", "      save_path=save_path2,\n", "      load_path=None, # put something else if you want to reload an agent instead of creating a new one\n", "      logs_dir=logs_dir2,\n", "      kwargs_archi=kwargs_archi2,\n", "      training_param=tp2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we re-use the code that we wrote earlier to assess its performance." 
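, "\n", "For reference, the `convert_obs` modification mentioned at the beginning of this section (feeding only the flow ratio `rho` to the model) could look like the hypothetical sketch below, based on the `AgentWithConverter` interface shown in section II.B. Note that this is not what the `DuelQSimple` baseline above does: it selects observation attributes through `list_attr_obs` instead.\n", "\n", "```python\n", "from grid2op.Agent import AgentWithConverter\n", "\n", "class RhoOnlyAgent(AgentWithConverter):\n", "    # hypothetical illustration: the model only sees the line loading ratio \"rho\"\n", "    def convert_obs(self, observation):\n", "        return observation.rho  # one value per powerline\n", "\n", "    def my_act(self, transformed_observation, reward, done=False):\n", "        # placeholder decision rule: always return the action with id 0\n", "        # (a trained model would return the id with the highest predicted score)\n", "        return 0\n", "```"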
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "path_save_results2 = \"{}_results\".format(save_path2)\n", "\n", "evaluated_agent2, res_runner2 = evaluate(env,\n", " name=agent_name2,\n", " load_path=save_path2,\n", " logs_path=path_save_results2,\n", " nb_episode=2,\n", " nb_process=1,\n", " max_steps=100,\n", " verbose=True,\n", " save_gif=False)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.2" } }, "nbformat": 4, "nbformat_minor": 2 }