{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Training Agent, action converters and l2rpn_baselines\n", "Try me out interactively with: [![Binder](./img/badge_logo.svg)](https://mybinder.org/v2/gh/rte-france/Grid2Op/master)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is recommended to have a look at the [0_basic_functionalities](0_basic_functionalities.ipynb), [1_Observation_Agents](1_Observation_Agents.ipynb) and [2_Action_GridManipulation](2_Action_GridManipulation.ipynb) notebooks before getting into this one." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Objectives**\n", "\n", "In this notebook we will expose :\n", "* how to use the \"converters\": these allow to link several different representations of the actions (for example as `Action` objects or integers).\n", "* how to train a (naive) Agent using reinforcement learning.\n", "* how to inspect (rapidly) the action taken by the Agent.\n", "\n", "**NB** In this tutorial, we train an Agent inspired from this blog post: [deep-reinforcement-learning-tutorial-with-open-ai-gym](https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368). Many other different reinforcement learning tutorials exist. The code presented in this notebook only aims at demonstrating how to use the Grid2Op functionalities to train a Deep Reinforcement learning Agent and inspect its behaviour, but not at building a very smart agent. Nothing about the performance, training strategy, type of Agent, meta parameters, etc, should be retained as a common practice.\n", "\n", "**Don't hesitate to check the grid2op module grid2op.gym_compat for a closer integration between grid2op and openAI gym.** This topic is not covered in this notebook." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import os\n", "import sys\n", "import grid2op" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
run previous cell, wait for 2 seconds
\n", "" ], "text/plain": [ "" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "res = None\n", "try:\n", " from jyquickhelper import add_notebook_menu\n", " res = add_notebook_menu()\n", "except ModuleNotFoundError:\n", " print(\"Impossible to automatically add a menu / table of content to this notebook.\\nYou can download \\\"jyquickhelper\\\" package with: \\n\\\"pip install jyquickhelper\\\"\")\n", "res" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## I) Manipulating action representation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Grid2op package has been built with an \"object-oriented\" perspective: almost everything is encapsulated in a dedicated `class`. This allows for more customization of the plateform.\n", "\n", "The downside of this approach is that machine learning methods, especially in deep learning, often prefer to deal with vectors rather than with \"complex\" objects. Indeed, as we covered in the previous tutorials on the platform, we saw that building our own actions can be tedious and can sometime require important knowledge of the powergrid.\n", "\n", "On the contrary, in most of the standard Reinforcement Learning environments, actions have a higher representation. For example in pacman, there are 4 different types of actions: turn left, turn right, go up and do down. This allows for easy sampling (if you need to achieve an uniform sampling, you simply need to randomly pick a number between 0 and 3 included) and an easy representation: each action can be represented as a different component of a vector of dimension 4 [because there are 4 actions]. \n", "\n", "On the other hand, this representation is not \"human friendly\". It is quite convenient in the case of pacman because the action space is rather small, making it possible to remember which action corresponds to which component, but in the case of the grid2op package, there are hundreds or even thousands of actions, making it impossible to remember which component corresponds to which action. We suppose that we do not really care about this here, as tutorials on Reinforcement Learning with discrete action space often assume that actions are labeled with integers (such as in pacman for example).\n", "\n", "However, to allow RL agent to train more easily, we allow to make some \"[Converters](https://grid2op.readthedocs.io/en/latest/converters.html)\" whose roles are to allow an agent to deal with a custom representation of the action space. The class [AgentWithConverter](https://grid2op.readthedocs.io/en/latest/agent.html#grid2op.Agent.AgentWithConverter) is perfect for such usage." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/benjamin/Documents/grid2op_dev/getting_started/grid2op/MakeEnv/Make.py:269: UserWarning: You are using a development environment. This environment is not intended for training agents. It might not be up to date and its primary use if for tests (hence the \"test=True\" you passed as argument). 
Use at your own risk.\n", " warnings.warn(_MAKE_DEV_ENV_WARN)\n" ] } ], "source": [ "# import the useful classes\n", "import numpy as np\n", "\n", "from grid2op import make\n", "from grid2op.Agent import RandomAgent \n", "max_iter = 100 # to make computation much faster we will only consider 100 time steps instead of 287\n", "train_iter = 200\n", "max_eval_step = 20\n", "env_name = \"rte_case14_redisp\"\n", "env = make(env_name, test=True)\n", "env.seed(0) # this is to ensure all sources of randomness in the environment are reproducible\n", "my_agent = RandomAgent(env.action_space)\n", "my_agent.seed(0) # this is to ensure that all actions made by this random agent will be the same" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And that's it. This agent will be able to perform any action, but instead of going through the description of the actions from a power system point of view (i.e. setting what is connected to what, what is disconnected, etc.) it will simply choose an integer with the method `my_act`. This integer will then be converted back to a proper action.\n", "\n", "Here is an example of the action representation as seen by the Agent (here, integers):" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "172\n", "47\n", "117\n" ] } ], "source": [ "for el in range(3):\n", "    print(my_agent.my_act(None, None))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below you can see that the `act` function behaves as expected, handling proper `Action` objects:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "This action will:\n", "\t - NOT change anything to the injections\n", "\t - NOT perform any redispatching action\n", "\t - NOT force any line status\n", "\t - NOT switch any line status\n", "\t - NOT switch anything in the topology\n", "\t - Set the bus of the following element:\n", "\t \t - assign bus 2 to line (extremity) 1 [on substation 4]\n", "\t \t - assign bus 2 to line (extremity) 9 [on substation 4]\n", "\t \t - assign bus 1 to line (extremity) 11 [on substation 4]\n", "\t \t - assign bus 2 to line (origin) 17 [on substation 4]\n", "\t \t - assign bus 2 to load 4 [on substation 4]\n", "This action will:\n", "\t - NOT change anything to the injections\n", "\t - NOT perform any redispatching action\n", "\t - NOT force any line status\n", "\t - NOT switch any line status\n", "\t - Change the bus of the following element:\n", "\t \t - switch bus of line (origin) 11 [on substation 3]\n", "\t \t - switch bus of line (origin) 15 [on substation 3]\n", "\t - NOT force any particular bus configuration\n", "This action will:\n", "\t - NOT change anything to the injections\n", "\t - NOT perform any redispatching action\n", "\t - NOT force any line status\n", "\t - NOT switch any line status\n", "\t - NOT switch anything in the topology\n", "\t - Set the bus of the following element:\n", "\t \t - assign bus 2 to line (origin) 2 [on substation 8]\n", "\t \t - assign bus 1 to line (origin) 3 [on substation 8]\n", "\t \t - assign bus 1 to line (extremity) 16 [on substation 8]\n", "\t \t - assign bus 1 to line (extremity) 19 [on substation 8]\n", "\t \t - assign bus 1 to load 6 [on substation 8]\n" ] } ], "source": [ "for el in range(3):\n", "    print(my_agent.act(None, None))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**NB** Lots of these actions are equivalent to the \"do nothing\" 
action in some situations. For example, trying to reconnect a powerline that is already connected will not do anything. The same goes for the topology: if everything is already connected to bus 1, then an action that connects things to bus 1 on the same substation will not affect the powergrid." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## II) Training an Agent" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this tutorial, we will show how to build a Q-learning Agent. Most of the code originated from this (now deleted) blog post: [https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368](https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368). \n", "\n", "The goal of this notebook is to demonstrate how to train an agent using the grid2op framework. The key message is: since grid2op fully implements the gym API, it is rather easy to do. We will use the [l2rpn baselines](https://github.com/rte-france/l2rpn-baselines) repository and implement a Double Dueling Deep Q-learning algorithm. For more information, you can find the code in the dedicated repository [here](https://github.com/rte-france/l2rpn-baselines/tree/master/l2rpn_baselines/DoubleDuelingDQN).\n", "\n", "**Requirements** This notebook requires having `keras` installed on your machine as well as the `l2rpn_baselines` package.\n", "\n", "As always in these notebooks, we will use the `rte_case14_redisp` test environment. More data is available if you don't pass the `test=True` parameter." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### II.A) Defining some \"helpers\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The type of Agent we are using requires a bit of setup, independently of Grid2Op. We will reuse the code shown in \n", "[https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368](https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368) and in [Reinforcement-Learning-Tutorial](https://github.com/abhinavsagar/Reinforcement-Learning-Tutorial) from Abhinav Sagar. The code is released under the *MIT license* found here: [MIT License](https://github.com/abhinavsagar/Reinforcement-Learning-Tutorial/blob/master/LICENSE).\n", "\n", "This first section aims at defining these classes." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You will need to install the l2rpn_baselines library. Since this library is available on PyPI, this can be done easily."
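, "\n", "\n", "Once the package is installed (the next cell prints the exact command to type), a quick optional sanity check is to import it and print its version. The snippet below is just a suggestion: the `__version__` attribute is a common convention but not guaranteed, hence the `getattr` with a fallback.\n", "\n", "```python\n", "# optional sanity check: if this import fails, l2rpn_baselines is not installed in this environment\n", "import l2rpn_baselines\n", "\n", "# the DuelQSimple baseline used later in this notebook should also be importable\n", "from l2rpn_baselines.DuelQSimple import train\n", "\n", "# most packages expose a __version__ attribute; fall back gracefully if this one does not\n", "print(\"l2rpn_baselines version:\", getattr(l2rpn_baselines, \"__version__\", \"unknown\"))\n", "```"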
] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "To install l2rpn_baselines, either uncomment the cell below, or type, in a command prompt:\n", "\t/usr/bin/python3 -m pip install l2rpn_baselines\n" ] } ], "source": [ "print(\"To install l2rpn_baselines, either uncomment the cell below, or type, in a command prompt:\\n{}\".format(\n", " (\"\\t{} -m pip install l2rpn_baselines\".format(sys.executable))))" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# !$sys.executable -m pip install l2rpn_baselines" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "But first let's import the necessary dependencies :" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "#tf2.0 friendly\n", "import numpy as np\n", "import random\n", "import warnings\n", "import l2rpn_baselines" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### II.B) Adaptation of the inputs\n", "\n", "For many of the Deep Reinforcement Learning problems (for example for model used to play Atari games), the inputs are images and the outputs are integers that encode the different possible actions (typically \"move up\" or \"move down\" in Atari). In our system (the powergrid) it is rather different. We did our best to make the convertion between complex and simple structures easy. Indeed, the use of converters such as ([IdToAct](https://grid2op.readthedocs.io/en/latest/converter.html#grid2op.Converter.IdToAct)) allows easily to:\n", "- convert the class \"Observation\" into vectors automatically\n", "- map the actions from integers to complete `Action` objects defined in the previous notebooks\n", "\n", "In essence, a converter allows to manipulate the \"action space\" of the Agent and is such that:\n", "- the Agent manipulates a simple, custom structure of actions\n", "- the Converter takes care of mapping from this simple structure to complex grid2op Action / Observation objects\n", "- as a whole, the Agent will actually return full Action / Observation objects to the environment while only working with more simple objects\n", "\n", "\n", "#### A note on the converter\n", "To use this converter, the Agent must inherit from the class [`grid2op.Agent.AgentWithConverter`](https://grid2op.readthedocs.io/en/latest/agent.html#grid2op.Agent.AgentWithConverter) and implement the following interface (shown here as an example):\n", "\n", "\n", "```python\n", "from grid2op.Agent import AgentWithConverter\n", "class MyAgent(AgentWithConverter):\n", " def __init__(self, action_space, action_space_converter=None):\n", " super(MyAgent, self).__init__(action_space=action_space, action_space_converter=action_space_converter)\n", " # for example you can define here all the actions you will consider\n", " self.my_actions = [action_space(),\n", " action_space({\"redispatching\": [0,+1]}),\n", " action_space({\"set_line_status\": [(0,-1)]}),\n", " action_space({\"change_bus\": {\"lines_or_id\": [12]}}),\n", " ...\n", " ]\n", " # or load them from a file for example...\n", " # self.my_action = np.load(\"my_action_pre_selected.npy\")\n", " \n", " # you can also in this agent load a neural network...\n", " self.my_nn_model = model.load(\"my_saved_neural_network_weights.h5\")\n", " \n", " def convert_obs(self, observation):\n", " \"\"\"\n", " This method is used to convert the observation, represented as a class Observation in input\n", " into a 
\"transformed_observation\" that will be manipulated by the agent\n", " An example here will transform the observation into a numpy array.\n", " \n", " It is recommended to modify it to suit your needs.\n", " \n", " \"\"\"\n", " return observation.to_vect()\n", " \n", " def convert_act(self, encoded_act):\n", " \"\"\"\n", " This method will take an \"encoded_act\" (for example a integer) into a valid grid2op action.\n", " \"\"\"\n", " if encoded_act < 0 or encoded_act > len(self.my_action):\n", " raise RuntimeError(\"Invalid action with id {}\".format(encoded_act))\n", " return self.my_actions[encoded_act]\n", " \n", " def my_act(self, transformed_observation, reward, done=False):\n", " \"\"\"\n", " This is the main function where you can take your decision.\n", " \n", " Instead of:\n", " - calling \"act(observation, reward, done)\" you implement \n", " \"my_act(transformed_observation, reward, done)\"\n", " - this manipulates only \"transformed_observation\" fully flexible as you defined \"convert_obs\"\n", " - and returns \"encoded_action\" that are then digest automatically by \n", " \"convert_act(encoded_act)\" and to return valid actions.\n", " \n", " Here we suppose, as many dqn agent, that `my_nn_model` return a vector of size \n", " nb_actions filled with number between 0 and 1 and we take the action given the highest score\n", " \"\"\"\n", " pred_score = self.my_nn_model.predict(transformed_observation, reward, done)\n", " res = np.argmax(pred_score)\n", " return res\n", "```\n", "And that's it. There is nothing else to do, your agent is ready to learn how to control a powergrid using only these 3 functions.\n", "\n", "\n", "**NB** A few things are worth noting:\n", "- if you use an agent with a converter, do not modify the method **act** but rather change the method **my_act**, this is really important !\n", "- the method **my_act**, which you will have to implement, takes a transformed observation as an argument and must return a transformed action. This transformed action will then be converted back to the original action space to an actual `Action` object in the **act** method, that you must leave unchanged.\n", "- some automatic functions can compute the set of all possible actions, so there is no need to write \"self.my_actions = ...\". This was done as an example only.\n", "- if the converter is properly set up, you don't even need to modify \"convert_obs(self, observation)\" and \"convert_act(self, encoded_act)\" as default convertions are already provided in the implementation. However, if you want to work with a custom observation space and action space, you can modify these two methods in order to have the convertions that you need." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we consider the observation as a whole and do not try any modifications of the features. This means that the vector (the observation) that the agent will receive is going to be really big, not scaled and filled with a lot of information that may not be really useful. It could be tried to select only a subset of the available features and apply a pre-processing function to them." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### II.C) Writing the code of the Agent and train it" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### a) Code of the agent\n", "\n", "Here we show the most interesting part (in this tutorial) of the code that is implemented as a baseline. 
For a full description of the code, you can go take a look [here](https://github.com/rte-france/l2rpn-baselines/tree/master/l2rpn_baselines/DuelQSimple).\n", "\n", "This is the `DuelQ_NN.py` file (we copy-pasted the functions defined in its base class `l2rpn_baselines.utils.BaseDeepQ.py` for clarity):\n", "\n", "```python\n", "import tensorflow.keras as tfk\n", "class DuelQ_NN(BaseDeepQ):\n", "    \"\"\"Constructs the desired deep Q-learning network\"\"\"\n", "    def __init__(self,\n", "                 nn_params, # neural network meta parameters, defining its architecture\n", "                 training_param=None # training scheme (learning rate, learning rate decay, etc.)\n", "                 ):\n", "        self.action_size = action_size\n", "        self.observation_size = observation_size\n", "        HIDDEN_FOR_SIMPLICITY\n", "\n", "    def construct_q_network(self):\n", "        \"\"\"\n", "        It uses the architecture defined in the `nn_archi` attributes.\n", "\n", "        \"\"\"\n", "        self.model = Sequential()\n", "        input_layer = Input(shape=(self.nn_archi.observation_size,),\n", "                            name=\"observation\")\n", "\n", "        lay = input_layer\n", "        for lay_num, (size, act) in enumerate(zip(self.nn_archi.sizes, self.nn_archi.activs)):\n", "            lay = Dense(size, name=\"layer_{}\".format(lay_num))(lay) # hidden fully connected layer\n", "            lay = Activation(act)(lay)\n", "\n", "        fc1 = Dense(self.action_size)(lay)\n", "        advantage = Dense(self.action_size, name=\"advantage\")(fc1)\n", "\n", "        fc2 = Dense(self.action_size)(lay)\n", "        value = Dense(1, name=\"value\")(fc2)\n", "\n", "        meaner = Lambda(lambda x: K.mean(x, axis=1))\n", "        mn_ = meaner(advantage)\n", "        tmp = subtract([advantage, mn_])\n", "        policy = add([tmp, value], name=\"policy\")\n", "\n", "        self.model = Model(inputs=[input_layer], outputs=[policy])\n", "        self.schedule_model, self.optimizer_model = self.make_optimiser()\n", "        self.model.compile(loss='mse', optimizer=self.optimizer_model)\n", "\n", "        self.target_model = Model(inputs=[input_layer], outputs=[policy])\n", "        \n", "    def predict_movement(self, data, epsilon, batch_size=None):\n", "        \"\"\"\n", "        Predict the movement (action) chosen by the network, with probability epsilon\n", "        of picking a random move instead.\n", "        \n", "        This was part of BaseDeepQ and was copy-pasted without any modification.\n", "        \"\"\"\n", "        if batch_size is None:\n", "            batch_size = data.shape[0]\n", "\n", "        rand_val = np.random.random(batch_size)\n", "        q_actions = self.model.predict(data, batch_size=batch_size)\n", "\n", "        opt_policy = np.argmax(np.abs(q_actions), axis=-1)\n", "        opt_policy[rand_val < epsilon] = np.random.randint(0, self.action_size, size=(np.sum(rand_val < epsilon)))\n", "        return opt_policy, q_actions[0, opt_policy]\n", "```\n", "\n", "And this is the code of the `DuelQSimple` agent itself:\n", "```python\n", "from grid2op.Agent import AgentWithConverter # all converter agents should inherit from this\n", "from grid2op.Converter import IdToAct # this is the automatic converter that maps actions given as IDs (integers)\n", "# to valid grid2op actions (in particular it is able to compute the list of all actions).\n", "\n", "from l2rpn_baselines.DoubleDuelingDQN.DoubleDuelingDQN_NN import DoubleDuelingDQN_NN\n", "class DuelQSimple(DeepQAgent):\n", "    def __init__(self,\n", "                 action_space,\n", "                 nn_archi,\n", "                 HIDDEN_FOR_SIMPLICITY\n", "                 ):\n", "        ...\n", "        HIDDEN_FOR_SIMPLICITY\n", "        ...\n", "        # Load network graph\n", "        self.deep_q = None ## is loaded when first called or when explicitly loaded\n", "        \n", "    ## Agent Interface\n", "    def convert_obs(self, observation):\n", "        \"\"\"\n", "        Generic way to convert an observation. 
This transforms it into a vector and then selects the attributes\n", "        that were specified in :attr:`l2rpn_baselines.utils.NNParams.list_attr_obs` (they have been\n", "        extracted once and for all in the :attr:`DeepQAgent._indx_obs` vector).\n", "        \"\"\"\n", "        obs_as_vect = observation.to_vect()\n", "        self._tmp_obs[:] = obs_as_vect[self._indx_obs]\n", "        return self._tmp_obs\n", "\n", "    def convert_act(self, action):\n", "        \"\"\"\n", "        Simply call the convert_act method of the base class.\n", "        Overriding it is not mandatory, as this is the standard behaviour in OOP (object oriented programming);\n", "        we only show it here as an illustration.\n", "        \"\"\"\n", "        return super().convert_act(action)\n", "        \n", "    def my_act(self, transformed_observation, reward, done=False):\n", "        \"\"\"\n", "        This function will return the action (its id) selected by the underlying \n", "        :attr:`DeepQAgent.deep_q` network.\n", "        \n", "        \n", "        Before being used, this method requires that the :attr:`DeepQAgent.deep_q` is created. \n", "        To that end, a call to :func:`DeepQAgent.init_deep_q` needs to have been performed \n", "        (this is automatically done if you use the baselines we provide and their `evaluate` \n", "        and `train` scripts).\n", "        \"\"\"\n", "        predict_movement_int, *_ = self.deep_q.predict_movement(transformed_observation, epsilon=0.0)\n", "        res = int(predict_movement_int)\n", "        self._store_action_played(res)\n", "        return res\n", "```\n", "\n", "#### b) Training the model\n", "\n", "Now we can define the agent and train it.\n", "\n", "To that end, we will use the \"train\" function provided in the `l2rpn_baselines` repository.\n", "\n", "**NB** The code below can take a few minutes to run. It's training a Deep Reinforcement Learning Agent after all. If this takes too long on your machine, you can always decrease `train_iter` (the number of training iterations). In that case, the Agent will probably not be really good.\n", "\n", "**NB** It would take much longer to train a good agent." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/home/benjamin/Documents/grid2op_dev/getting_started/grid2op/MakeEnv/Make.py:269: UserWarning: You are using a development environment. This environment is not intended for training agents. It might not be up to date and its primary use if for tests (hence the \"test=True\" you passed as argument). Use at your own risk.\n", " warnings.warn(_MAKE_DEV_ENV_WARN)\n", "100%|██████████| 1000/1000 [00:37<00:00, 26.66it/s]\n" ] } ], "source": [ "# create an environment\n", "env = make(env_name, test=True) \n", "# don't forget to set \"test=False\" (or remove it, as False is the default value) for \"real\" training\n", "\n", "# import the train function and train your agent\n", "from l2rpn_baselines.DuelQSimple import train\n", "from l2rpn_baselines.utils import NNParam, TrainingParam\n", "agent_name = \"test_agent\"\n", "save_path = \"saved_agent_DDDQN_{}\".format(train_iter)\n", "logs_dir = \"tf_logs_DDDQN\"\n", "\n", "\n", "# we then define the neural network we want to make (you may change this at will)\n", "## 1. 
first we choose what \"part\" of the observation we want as input, \n", "## here for example only the generator and load information\n", "## see https://grid2op.readthedocs.io/en/latest/observation.html#main-observation-attributes\n", "## for the detailed about all the observation attributes you want to have\n", "li_attr_obs_X = [\"prod_p\", \"prod_v\", \"load_p\", \"load_q\"]\n", "# this automatically computes the size of the resulting vector\n", "observation_size = NNParam.get_obs_size(env, li_attr_obs_X) \n", "\n", "## 2. then we define its architecture\n", "sizes = [300, 300, 300] # 3 hidden layers, of 300 units each, why not...\n", "activs = [\"relu\" for _ in sizes] # all followed by relu activation, because... why not\n", "## 4. you put it all on a dictionnary like that (specific to this baseline)\n", "kwargs_archi = {'observation_size': observation_size,\n", " 'sizes': sizes,\n", " 'activs': activs,\n", " \"list_attr_obs\": li_attr_obs_X}\n", "\n", "# you can also change the training parameters you are using\n", "# more information at https://l2rpn-baselines.readthedocs.io/en/latest/utils.html#l2rpn_baselines.utils.TrainingParam\n", "tp = TrainingParam()\n", "tp.batch_size = 32 # for example...\n", "tp.update_tensorboard_freq = int(train_iter / 10)\n", "tp.save_model_each = int(train_iter / 3)\n", "tp.min_observation = int(train_iter / 5)\n", "train(env,\n", " name=agent_name,\n", " iterations=train_iter,\n", " save_path=save_path,\n", " load_path=None, # put something else if you want to reload an agent instead of creating a new one\n", " logs_dir=logs_dir,\n", " kwargs_archi=kwargs_archi,\n", " training_param=tp)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'saved_agent_DDDQN_1000'" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "save_path" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Logs are saved in the \"tf_logs_DDDQN\" log repository. To watch what happens during training, you can type the command (from a bash command line for example):\n", "```\n", "tensorboard --logdir='tf_logs_DDDQN'\n", "```\n", "You can even do it while it's training. Tensorboard allows you to monitor, during training, different quantities, for example the loss of your neural network or even the last number of steps the agent performed before getting a game over etc.\n", "\n", "At first glimpse here is what it could look like (only the first graph out of :\n", "\n", "\n", "Monitoring of the training | Representation as a graph of the neural network\n", ":-------------------------:|:-------------------------:\n", "![](./img/tensorboard_example.png) | ![](./img/tensorboard_graph.png)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## III) Evaluating the Agent" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now, it is time to test the agent that we trained.\n", "\n", "To do that, we have multiple choices.\n", "\n", "We can either re-code the \"DeepQAgent\" class to load the stored weights (that have been saved during training) when it is initialized (this is not covered in this notebook), or we can also directly specify the instance of the Agent to use in the Grid2Op Runner.\n", "\n", "Doing this is fairly simple. 
First, you need to specify that you won't use the \"*agentClass*\" argument by setting it to ``None``, and secondly you simply have to provide the agent instance to be used as the *agentInstance* argument.\n", "\n", "**NB** If you don't do that, the Runner will be created (the constructor will raise an exception). If you choose to feed the \"*agentClass*\" argument with a class, your agent will be re-instanciated from scratch with this class. If you do re-instanciate your agent and if it is not planned in the class to load pre-trained weights, then **the agent will not be pre-trained** and will be unlikely to perform well on the task." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### III.A) Evaluate the Agent" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have \"successfully\" trained our Agent, we will evaluate it. The evaluation can be done classically using a standard Runner with the following code:\n", "\n", "```python\n", "from grid2op.Runner import Runner\n", "from tqdm.notebook import tqdm\n", "# chose a scoring function (might be different from the reward you use to train your agent)\n", "from grid2op.Reward import L2RPNReward\n", "scoring_function = L2RPNReward \n", "path_save_results = \"{}_results\".format(save_path)\n", "\n", "# load your agent (a bit technical to know exactly what to import and how to use it)\n", "# this is why we made the \"evaluate\" function that simplifies greatly the process.\n", "from l2rpn_baselines.DuelQSimple import DuelQSimple\n", "from l2rpn_baselines.DuelQSimple import DuelQ_NNParam\n", "path_model, path_target_model = DuelQ_NNParam.get_path_model(save_path, agent_name)\n", "nn_archi = DuelQ_NNParam.from_json(os.path.join(path_model, \"nn_architecture.json\"))\n", "\n", "\n", "my_agent = DuelQSimple(env.action_space, nn_archi, name=agent_name)\n", "my_agent.load(save_path)\n", "my_agent.init_obs_extraction(env)\n", "\n", "# here we do that to limit the time take, and will only assess the performance on \"max_iter\" iteration\n", "dict_params = env.get_params_for_runner()\n", "dict_params[\"gridStateclass_kwargs\"][\"max_iter\"] = max_iter\n", "# make a runner from an intialized environment\n", "runner = Runner(**dict_params, agentClass=None, agentInstance=my_agent)\n", "\n", "# run the episode\n", "res = runner.run(nb_episode=2, path_save=path_save_results, pbar=tqdm)\n", "print(\"The results for the trained agent are:\")\n", "for _, chron_name, cum_reward, nb_time_step, max_ts in res:\n", " msg_tmp = \"\\tFor chronics located at {}\\n\".format(chron_name)\n", " msg_tmp += \"\\t\\t - total score: {:.6f}\\n\".format(cum_reward)\n", " msg_tmp += \"\\t\\t - number of time steps completed: {:.0f} / {:.0f}\".format(nb_time_step, max_ts)\n", " print(msg_tmp)\n", "```\n", "*NB* In this case, the Runner will use a \"scoring function\" that might be different from the \"reward function\" used during training. In our case, We use the `L2RPNReward` function for both training and evaluating. This is not mandatory.\n", "\n", "But there is, and this is a requirement for all \"baselines\" an \"evaluate\" function that does precisely that. We will use that in this notebook." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "\r", "episode: 0%| | 0/2 [00:00