{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "kzN-Q9Zv1He1" }, "source": [ "# DQN\n", "\n", "The goal of this exercise is to implement DQN and to apply it to the cartpole balancing problem. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "try:\n", " import google.colab\n", " IN_COLAB = True\n", "except:\n", " IN_COLAB = False\n", "\n", "if IN_COLAB:\n", " !pip install -U gymnasium pygame moviepy" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "ZuVpP0LaxKM5", "outputId": "948030f3-cdfb-43a1-882a-6140df11639b" }, "outputs": [], "source": [ "import numpy as np\n", "rng = np.random.default_rng()\n", "import matplotlib.pyplot as plt\n", "import os\n", "from IPython.display import clear_output\n", "from collections import deque\n", "\n", "import gymnasium as gym\n", "print(\"gym version:\", gym.__version__)\n", "\n", "import pygame\n", "from moviepy.editor import ImageSequenceClip, ipython_display\n", "\n", "import tensorflow as tf\n", "import logging\n", "tf.get_logger().setLevel(logging.ERROR)\n", "\n", "class GymRecorder(object):\n", " \"\"\"\n", " Simple wrapper over moviepy to generate a .gif with the frames of a gym environment.\n", " \n", " The environment must have the render_mode `rgb_array_list`.\n", " \"\"\"\n", " def __init__(self, env):\n", " self.env = env\n", " self._frames = []\n", "\n", " def record(self, frames):\n", " \"To be called at the end of an episode.\"\n", " for frame in frames:\n", " self._frames.append(np.array(frame))\n", "\n", " def make_video(self, filename):\n", " \"Generates the gif video.\"\n", " directory = os.path.dirname(os.path.abspath(filename))\n", " if not os.path.exists(directory):\n", " os.mkdir(directory)\n", " self.clip = ImageSequenceClip(list(self._frames), fps=self.env.metadata[\"render_fps\"])\n", " self.clip.write_gif(filename, fps=self.env.metadata[\"render_fps\"], loop=0)\n", " del self._frames\n", " self._frames = []\n", "\n", "def running_average(x, N):\n", " kernel = np.ones(N) / N\n", " return np.convolve(x, kernel, mode='same')" ] }, { "cell_type": "markdown", "metadata": { "id": "EPakRvKRoA79" }, "source": [ "## Cartpole balancing task\n", "\n", "We are going to use the Cartpole balancing problem, which can be loaded with:\n", "\n", "```python\n", "gym.make('CartPole-v0')\n", "```\n", "\n", "States have 4 continuous values (position and speed of the cart, angle and angular speed of the pole) and there are 2 discrete actions (pushing the cart to the left or to the right). The reward is +1 for each transition where the pole is still standing (angle of less than 12° from the vertical). \n", "\n", "In CartPole-v0, the episode ends when the pole falls or after 200 steps. In CartPole-v1, the maximum episode length is 500 steps, which is too long for us, so we stick to v0 here.\n", "\n", "The maximal (undiscounted) return is therefore 200. Can DQN learn this?"
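, "\n", "\n", "As a rough order of magnitude (assuming the discount factor $\\gamma = 0.99$ suggested later in this notebook), the discounted return collected from the first step of a perfect episode is bounded by\n", "\n", "$$\\sum_{t=0}^{199} \\gamma^t = \\frac{1 - 0.99^{200}}{1 - 0.99} \\approx 87$$\n", "\n", "so the Q-values a well-trained network has to output are much larger than 1. Keep this number in mind for the reward scaling section at the end."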
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 438 }, "id": "zBkpg0MDoIxJ", "outputId": "58411c0e-4248-4e15-9f5e-b284aeea321e" }, "outputs": [], "source": [ "# Create the environment\n", "env = gym.make('CartPole-v0', render_mode=\"rgb_array_list\")\n", "recorder = GymRecorder(env)\n", "\n", "# Sample the initial state\n", "state, info = env.reset()\n", "\n", "# One episode:\n", "done = False\n", "return_episode = 0\n", "while not done:\n", "\n", " # Select an action randomly\n", " action = env.action_space.sample()\n", " \n", " # Sample a single transition\n", " next_state, reward, terminal, truncated, info = env.step(action)\n", "\n", " # End of the episode\n", " done = terminal or truncated\n", "\n", " # Update undiscounted return\n", " return_episode += reward\n", " \n", " # Go to the next state\n", " state = next_state\n", "\n", "print(\"Return:\", return_episode)\n", "\n", "recorder.record(env.render())\n", "video = \"videos/cartpole.gif\"\n", "recorder.make_video(video)\n", "ipython_display(video)" ] }, { "cell_type": "markdown", "metadata": { "id": "_3CIDqP41Wvf" }, "source": [ "As the problem is quite simple (4 state variables, 2 actions), DQN can run on a single CPU. However, we advise you to run the notebook on a GPU in Colab: training takes quite a long time and would otherwise drain your laptop's battery or make it run hot.\n", "\n", "From now on, we will stop displaying the cartpole on Colab, as we want to go fast." ] }, { "cell_type": "markdown", "metadata": { "id": "8hEvKXD1LDCq" }, "source": [ "## Creating the model\n", "\n", "The first step is to create the value network using `keras`. We will not need anything fancy: a simple fully connected network with 4 input neurons, two hidden layers of 64 neurons each and 2 output neurons will do the trick. Use ReLU activation functions in the hidden layers and the Adam optimizer.\n", "\n", "**Q:** Which loss function should we use? Think about which arguments have to be passed to `model.compile()` and what activation function is required in the output layer.\n", "\n", "We will need to create two identical networks: the trained network and the target network. You should therefore create a method that returns a compiled model, so it can be called twice. You should pass it the environment (so the network can know how many input and output neurons it needs) and the learning rate for the Adam optimizer.\n", "\n", "```python\n", "def create_model(env, lr):\n", " \n", " model = Sequential()\n", "\n", " # ...\n", "\n", " return model\n", "```\n", "\n", "**Q:** Implement the method accordingly." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "h67ZdBDZ6PKL" }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "id": "UF4OBQRtOpZZ" }, "source": [ "Let's test this method by creating the trained and target networks.\n", "\n", "**Important:** every time you call `create_model`, a new neural network will be instantiated but the previous ones will not be deleted. During this exercise, you may have to create hundreds of networks because of the incremental implementation of DQN: all networks will stay instantiated in the RAM, and your computer/colab tab will freeze after a while. Before creating new networks, delete all existing ones with:\n", "\n", "```python\n", "tf.keras.backend.clear_session()\n", "```\n", "\n", "**Q:** Create the trained and target networks. The learning rate does not matter for now. 
Instantiate the Cartpole environment and print the output of both networks for the initial state (`state, info = env.reset()`). Are they the same?\n", "\n", "*Hint:* `model.predict(X, verbose=0)` expects an array X of shape (N, 4), with N the number of examples. Here, we have only one example, so make sure to reshape `state` so it has the shape (1, 4) (otherwise tf will complain)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "IywqFrVTOq8N", "outputId": "5298b903-b14b-4136-d8db-16c74b321199" }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "id": "lJ5sZzqfQ2MK" }, "source": [ "The target network has the same structure as the trained network, but not the same weights, as they are randomly initialized. We want the target network $\\theta'$ to start with exactly the same weights as the trained network $\\theta$. You can obtain the weights of a network with:\n", "\n", "```python\n", "w = model.get_weights()\n", "```\n", "\n", "and set weights using:\n", "\n", "```python\n", "model.set_weights(w)\n", "```\n", "\n", "**Q:** Transfer the weights of the trained model to the target model. Compare their predictions for the current state." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "VI9b0cmZQ2mr", "outputId": "48ea740d-9abd-4d88-942c-f895c27baa25" }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "id": "FUEWBYwUOpxm" }, "source": [ "## Experience replay memory\n", "\n", "The second thing that we need is the experience replay memory (or replay buffer). We need a container like a python list where we append (s, a, r, s', done) transitions (as in Q-learning), but with a maximal capacity: when there are already $C$ transitions in the list, one should stop appending to the list and instead start overwriting its oldest entries.\n", "\n", "This would not be very hard to write ourselves, but it would take time and the risk of hard-to-notice bugs is high. \n", "\n", "Here is a basic implementation of the replay buffer using **double-ended queues** (deque). A deque is a list with a maximum capacity: if the deque is full, appending a new element automatically discards the oldest one. Exactly what we need. This implementation uses one deque per element in (s, a, r, s', done), but one could also append the whole transition to a single deque.\n", "\n", "**Q:** Read the code of the ReplayBuffer and understand what it does."
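, "\n", "\n", "If you have never used a deque before, here is a minimal, standalone illustration of the `maxlen` behaviour (standard library only, nothing DQN-specific):\n", "\n", "```python\n", "from collections import deque\n", "\n", "d = deque(maxlen=3)\n", "for i in range(5):\n", "    d.append(i)\n", "    print(list(d))\n", "\n", "# Only the 3 most recent items are kept:\n", "# [0]\n", "# [0, 1]\n", "# [0, 1, 2]\n", "# [1, 2, 3]\n", "# [2, 3, 4]\n", "```"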
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hf-CRYjS6PKO" }, "outputs": [], "source": [ "class ReplayBuffer:\n", " \"Basic implementation of the experience replay memory using separated deques.\"\n", " def __init__(self, max_capacity):\n", " self.max_capacity = max_capacity\n", " \n", " # deques for each element\n", " self.states = deque(maxlen=max_capacity)\n", " self.actions = deque(maxlen=max_capacity)\n", " self.rewards = deque(maxlen=max_capacity)\n", " self.next_states = deque(maxlen=max_capacity)\n", " self.dones = deque(maxlen=max_capacity)\n", " \n", " def append(self, state, action, reward, next_state, done):\n", " # Store data\n", " self.states.append(state)\n", " self.actions.append(action)\n", " self.rewards.append(reward)\n", " self.next_states.append(next_state)\n", " self.dones.append(done)\n", " \n", " def sample(self, batch_size):\n", " # Do not return samples if we do not have at least 2*batch_size transitions\n", " if len(self.states) < 2*batch_size: \n", " return []\n", " \n", " # Randomly choose the indices of the samples.\n", " indices = sorted(np.random.choice(np.arange(len(self.states)), batch_size, replace=False))\n", "\n", " # Return the corresponding transitions as five arrays (s, a, r, s', done)\n", " return [np.array([self.states[i] for i in indices]), \n", " np.array([self.actions[i] for i in indices]), \n", " np.array([self.rewards[i] for i in indices]), \n", " np.array([self.next_states[i] for i in indices]), \n", " np.array([self.dones[i] for i in indices])]" ] }, { "cell_type": "markdown", "metadata": { "id": "CjmNxom5eftK" }, "source": [ "**Q:** Run a random agent on Cartpole (without rendering) for a few episodes and append each transition to a replay buffer with a small capacity (e.g. 100). Sample a batch to check that everything makes sense." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "ajn1ht1dco5N", "outputId": "d95112be-68f0-4681-e354-72625f656b75" }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "id": "u29Tw9o6fRcw" }, "source": [ "## DQN agent\n", "\n", "Here starts the fun part. There are a lot of things to do here, but you will know whether it works or not only when everything has been (correctly) implemented. 
So here is a lot of text to read carefully, and then you are on your own.\n", "\n", "Reminder from the lecture:\n", "\n", "* Initialize value network $Q_{\\theta}$ and target network $Q_{\\theta'}$.\n", "\n", "* Initialize experience replay memory $\\mathcal{D}$ of maximal size $N$.\n", "\n", "* for $t \\in [0, T_\\text{total}]$:\n", "\n", " * Select an action $a_t$ based on $Q_\\theta(s_t, a)$, observe $s_{t+1}$ and $r_{t+1}$.\n", "\n", " * Store $(s_t, a_t, r_{t+1}, s_{t+1})$ in the experience replay memory.\n", "\n", " * Every $T_\\text{train}$ steps:\n", "\n", " * Sample a minibatch $\\mathcal{D}_s$ randomly from $\\mathcal{D}$.\n", "\n", " * For each transition $(s_k, a_k, r_k, s'_k)$ in the minibatch:\n", "\n", " * Compute the target value $t_k = r_k + \\gamma \\, \\max_{a'} Q_{\\theta'}(s'_k, a')$ using the target network.\n", "\n", " * Update the value network $Q_{\\theta}$ on $\\mathcal{D}_s$ to minimize:\n", "\n", " $$\\mathcal{L}(\\theta) = \\mathbb{E}_{\\mathcal{D}_s}[(t_k - Q_\\theta(s_k, a_k))^2]$$\n", "\n", " * Every $T_\\text{target}$ steps:\n", "\n", " * Update target network: $\\theta' \\leftarrow \\theta$.\n", "\n", "Here is the skeleton of the `DQNAgent` class that you have to write:\n", "\n", "```python\n", "class DQNAgent:\n", " \n", " def __init__(self, env, create_model, some_parameters):\n", " \n", " self.env = env\n", " \n", " # TODO: copy the parameters\n", "\n", " # TODO: Create the trained and target networks, copy the weights.\n", "\n", " # TODO: Create an instance of the replay memory\n", " \n", " def act(self, state):\n", "\n", " # TODO: Select an action using epsilon-greedy on the output of the trained model\n", "\n", " return action\n", " \n", " def update(self, batch):\n", " \n", " # TODO: train the model using the batch of transitions\n", " \n", " return loss # mse on the batch\n", "\n", " def train(self, nb_episodes):\n", "\n", " returns = []\n", " losses = []\n", "\n", " # TODO: Train the network for the given number of episodes\n", "\n", " return returns, losses\n", "\n", " def test(self):\n", "\n", " # TODO: one episode with epsilon temporarily set to 0\n", "\n", " return nb_steps # Should be 200 after learning\n", "```\n", "\n", "With this structure, it will be very simple to actually train the DQN on Cartpole:\n", "\n", "```python\n", "# Create the environment\n", "env = gym.make('CartPole-v0')\n", "\n", "# Create the agent\n", "agent = DQNAgent(env, create_model, other_parameters)\n", "\n", "# Train the agent\n", "returns, losses = agent.train(nb_episodes)\n", "\n", "# Plot the returns\n", "plt.figure(figsize=(10, 6))\n", "plt.plot(returns)\n", "plt.plot(running_average(returns, 10))\n", "plt.xlabel(\"Episodes\")\n", "plt.ylabel(\"Returns\")\n", "\n", "# Plot the losses\n", "plt.figure(figsize=(10, 6))\n", "plt.plot(losses)\n", "plt.xlabel(\"Episodes\")\n", "plt.ylabel(\"Training loss\")\n", "\n", "plt.show()\n", "\n", "# Test the network\n", "nb_steps = agent.test()\n", "print(\"Number of steps:\", nb_steps)\n", "```\n", "\n", "So you \"just\" have to fill the holes.\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "id": "4xwVaGiAif8G" }, "source": [ "### 1 - `__init__()`: Initializing the agent\n", "\n", "In this method, you should first copy the value of the parameters as attributes: learning rate, epsilon, gamma and so on.\n", "\n", "Suggested values: gamma = 0.99, learning_rate = 0.001 \n", "\n", "The second thing to do is to create the trained and target networks (with the same weights) and save them as attributes (the other methods will 
use them). Do not forget to clear the keras session first, otherwise the RAM will be quickly filled.\n", "\n", "The third thing is to create an instance of the ERM. Use a buffer limit of 5000 transitions (it should be passed as a parameter). \n", "\n", "Do not hesitate to add other attributes while implementing the other methods (e.g. counters)." ] }, { "cell_type": "markdown", "metadata": { "id": "jULjJ-EqkBuU" }, "source": [ "### 2 - `act()`: action selection\n", "\n", "We will use a simple $\\epsilon$-greedy method for the action selection, as in the previous exercises. \n", "\n", "The only difference is that we have to use the trained model to get the greedy action, using `trained_model.predict(X, verbose=0)`. This will return the Q-values of the two actions left and right. Use `argmax()` to return the greedy action (with probability 1 - $\\epsilon$). `env.action_space.sample()` should be used for the exploration (do not use the Q-network in that case, it is slow!).\n", "\n", "$\\epsilon$ will be scheduled with an initial value of 1.0 and an exponential decay rate of 0.0005 after each action. It is always better to keep a little exploration, so do not let $\\epsilon$ decay all the way to 0: keep a minimal value of 0.05 for epsilon. \n", "\n", "**Q:** Once this has been implemented, run your very slow random agent for 100 episodes to check that everything works correctly." ] }, { "cell_type": "markdown", "metadata": { "id": "spNME6LFtWFb" }, "source": [ "### 3 - `train()`: training loop\n", "\n", "This method will be very similar to the Q-learning agent that you implemented previously. Do not hesitate to copy and paste.\n", "\n", "Here are the parts of the DQN algorithm that should be implemented:\n", "\n", "* for $t \\in [0, T_\\text{total}]$:\n", "\n", " * Select an action $a_t$ based on $Q_\\theta(s_t, a)$, observe $s_{t+1}$ and $r_{t+1}$.\n", "\n", " * Store $(s_t, a_t, r_{t+1}, s_{t+1})$ in the experience replay memory.\n", "\n", " * Every $T_\\text{train}$ steps:\n", "\n", " * Sample a minibatch $\\mathcal{D}_s$ randomly from $\\mathcal{D}$.\n", "\n", " * Update the trained network using $\\mathcal{D}_s$.\n", "\n", " * Every $T_\\text{target}$ steps:\n", "\n", " * Update target network: $\\theta' \\leftarrow \\theta$.\n", "\n", "The main difference with Q-learning is that `update()` will be called only every `T_train = 4` steps: the number of updates to the trained network will be 4 times smaller than the number of steps made in the environment. Beware that if the ERM does not have enough transitions yet, you should not call `update()` (the `sample()` method above returns an empty list in that case), as illustrated in the sketch below.\n", "\n", "Updating the target network (copying the weights of the trained network) should happen every 100 steps. Pass these parameters to the constructor of the agent. \n", "\n", "The batch size can be set to 32."
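, "\n", "\n", "As a hint, a minimal sketch of the bookkeeping inside the step loop could look as follows (the attribute names `self.total_steps`, `self.buffer`, `self.model`, `self.target_model`, `self.batch_size`, `self.train_freq` and `self.target_freq` are only suggestions; use whatever you defined in `__init__()`):\n", "\n", "```python\n", "# After storing the current transition in the replay memory:\n", "self.total_steps += 1\n", "\n", "# Train the value network every T_train steps\n", "if self.total_steps % self.train_freq == 0:\n", "    batch = self.buffer.sample(self.batch_size)\n", "    if len(batch) > 0: # sample() returns [] if the buffer is still too small\n", "        loss = self.update(batch)\n", "\n", "# Copy the trained weights into the target network every T_target steps\n", "if self.total_steps % self.target_freq == 0:\n", "    self.target_model.set_weights(self.model.get_weights())\n", "```"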
] }, { "cell_type": "markdown", "metadata": { "id": "nAsS8V4NwAua" }, "source": [ "### 4 - `update()`: training the value network\n", "\n", "Using the provided minibatch, one should implement the following part of the DQN algorithm:\n", "\n", "* For each transition $(s_k, a_k, r_k, s'_k)$ in the minibatch:\n", "\n", " * Compute the target value $t_k = r_k + \\gamma \\, \\max_{a'} Q_{\\theta'}(s'_k, a')$ using the target network.\n", "\n", "* Update the value network $Q_{\\theta}$ on $\\mathcal{D}_s$ to minimize:\n", "\n", " $$\\mathcal{L}(\\theta) = \\mathbb{E}_{\\mathcal{D}_s}[(t_k - Q_\\theta(s_k, a_k))^2]$$\n", "\n", "So we just need to define the targets for each transition in the minibatch, and call `model.fit()` on the trained network to minimize the mse between the current predictions $Q_\\theta(s_k, a_k)$ and the targets.\n", "\n", "But we have a problem: the network has two outputs for the actions left and right, but we have only one target for the action that was executed. We cannot compute the mse between a vector with 2 elements and a single value... They must have the same size.\n", "\n", "As we only want to train the output neuron corresponding to the action $a_k$, we are going to:\n", "\n", "1. Use the trained network to predict the Q-values of both actions $[Q_\\theta(s_k, 0), Q_\\theta(s_k, 1)]$.\n", "2. Replace one of the values with the target, for example $[Q_\\theta(s_k, 0), t_k]$ if the second action was chosen.\n", "3. Minimize the mse between $[Q_\\theta(s_k, 0), Q_\\theta(s_k, 1)]$ and $[Q_\\theta(s_k, 0), t_k]$.\n", "\n", "That way, the first output neuron has a squared error of 0, so it won't learn anything. Only the second output neuron will have a non-zero mse and learn.\n", "\n", "There are more efficient ways to do this (using masks), but this will do the trick, the drawback being that we have to make a forward pass on the minibatch before calling `fit()`.\n", "\n", "The rest is pretty much the same as for your Q-learning agent. Do not forget that actions leading to a terminal state should only use the reward as a target, not the complete Bellman target $r + \\gamma \\, \\max Q$.\n", "\n", "*Hint:* as we sample a minibatch of 32 transitions, it is faster to call:\n", "\n", "```python\n", "Q_values = np.array(training_model.predict_on_batch(states))\n", "```\n", "\n", "than:\n", "\n", "```python\n", "Q_values = training_model.predict(states)\n", "```\n", "\n", "for reasons internal to tensorflow. Note that with tf2, you need to cast the result to numpy arrays, as eager mode is now the default.\n", "\n", "The method should return the training loss, which is contained in the `History` object returned by `model.fit()`. `model.fit()` should be called for one epoch only, with a batch size of 32 and `verbose` set to 0. " ] }, { "cell_type": "markdown", "metadata": { "id": "ym9zpNaK-wxl" }, "source": [ "### 5 - `test()`\n", "\n", "This method should run one episode with epsilon set to 0, without learning. The number of steps should be returned (do not bother discounting with gamma, the goal is to stay up for 200 steps). " ] }, { "cell_type": "markdown", "metadata": { "id": "gmEoNa1V_d7l" }, "source": [ "**Q:** Let's go! Run the agent for 150 episodes and observe how fast it manages to keep the pole up for 200 steps. \n", "\n", "Beware that running the same network twice can lead to very different results. In particular, policy collapse (the network was almost perfect, but suddenly crashes and becomes random) can happen. Just be patient. 
\n", "\n", "You can visualize a test trial using the `GymRecorder`: you just need to set the `env` attribute of your DQN agent to a new env with the render mode `rgb_array_list` and record the frames at the end." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "kkUo7qS26PKS" }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "id": "iMhXV-ezD8um" }, "source": [ "**Q:** How does the loss evolve? Does it make sense?" ] }, { "cell_type": "markdown", "metadata": { "id": "K1m4JWFF_0Cs" }, "source": [ "## Reward scaling\n", "\n", "**Q:** Do a custom test trial after training (i.e. do not call test(), but copy and adapt its code) and plot the Q-value of the selected action at each time step. Do you think it is a good output for the network? Could it explain why learning is so slow?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 388 }, "id": "MC3dqB8PBkjn", "outputId": "5c417c70-e4d7-4516-b8b7-227cb09c6a46" }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "id": "RzxvN0OcCuGd" }, "source": [ "**Q:** Implement **reward scaling** by dividing the received rewards by a fixed factor of 100 when computing the Bellman targets. That way, the final Q-values will be around 1, which may be much easier to learn.\n", "\n", "*Tip:* in order to avoid a huge copy and paste, you can inherit from your DQNAgent and only reimplement the desired function:\n", "\n", "```python\n", "class ScaledDQNAgent(DQNAgent):\n", " def update(self, batch):\n", " # Change the content of this function only\n", "```\n", "\n", "You should reduce the learning rate a bit (e.g. 0.001), as the magnitude of the targets has changed. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MQnS0eUkCukm" }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": { "id": "uq75lUygCu2P" }, "source": [ "**Q:** Depending on the time left and your motivation, vary the different parameters to understand their influence: learning rate, target update frequency, training update frequency, epsilon decay, gamma, etc. Change the size of the network. If you find better hyperparameters than what is proposed, please report them for next year!" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "12-DQN-solution.ipynb", "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" }, "vscode": { "interpreter": { "hash": "932956c8e5d2f79d68ff59e849758b6e4ddbf01f7f22c7d8bb3532c38341d908" } } }, "nbformat": 4, "nbformat_minor": 4 }