{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Monte-Carlo control " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "try:\n", " import google.colab\n", " IN_COLAB = True\n", "except:\n", " IN_COLAB = False\n", "\n", "if IN_COLAB:\n", " !pip install -U gymnasium pygame moviepy\n", " !pip install gymnasium[box2d]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "rng = np.random.default_rng()\n", "import matplotlib.pyplot as plt\n", "import os\n", "\n", "import gymnasium as gym\n", "print(\"gym version:\", gym.__version__)\n", "\n", "from moviepy.editor import ImageSequenceClip, ipython_display\n", "\n", "class GymRecorder(object):\n", " \"\"\"\n", " Simple wrapper over moviepy to generate a .gif with the frames of a gym environment.\n", " \n", " The environment must have the render_mode `rgb_array_list`.\n", " \"\"\"\n", " def __init__(self, env):\n", " self.env = env\n", " self._frames = []\n", "\n", " def record(self, frames):\n", " \"To be called at the end of an episode.\"\n", " for frame in frames:\n", " self._frames.append(np.array(frame))\n", "\n", " def make_video(self, filename):\n", " \"Generates the gif video.\"\n", " directory = os.path.dirname(os.path.abspath(filename))\n", " if not os.path.exists(directory):\n", " os.mkdir(directory)\n", " self.clip = ImageSequenceClip(list(self._frames), fps=self.env.metadata[\"render_fps\"])\n", " self.clip.write_gif(filename, fps=self.env.metadata[\"render_fps\"], loop=0)\n", " del self._frames\n", " self._frames = []" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The taxi environment\n", "\n", "In this exercise, we are going to apply **on-policy Monte-Carlo control** on the Taxi environment available in gym:\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create the environment in ansi mode, initialize it, and render the first state:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "env = gym.make(\"Taxi-v3\", render_mode='ansi')\n", "state, info = env.reset()\n", "print(state)\n", "print(env.render())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The agent is the black square. It can move up, down, left or right if there is no wall (the pipes and dashes). Its goal is to pick clients at the blue location and drop them off at the purple location. These locations are fixed (R, G, B, Y), but which one is the pick-up location and which one is the drop-off destination changes between each episode.\n", "\n", "**Q:** Re-run the previous cell multiple times to observe the diversity of initial states.\n", "\n", "The following cell prints the action space of the environment: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Action Space:\", env.action_space)\n", "print(\"Number of actions:\", env.action_space.n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are 6 discrete actions: south, north, east, west, pickup, dropoff.\n", " \n", "Let's now look at the observation space (state space):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"State Space:\", env.observation_space)\n", "print(\"Number of states:\", env.observation_space.n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are 500 discrete states. 
What are they?\n", "\n", "* The taxi can be anywhere in the 5x5 grid, giving 25 different locations.\n", "\n", "* The passenger can be at any of the four locations R, G, B, Y or in the taxi: 5 values.\n", "\n", "* The destination can be any of the four locations: 4 values.\n", "\n", "This indeed gives 25x5x4 = 500 different combinations.\n", "\n", "The internal representation of a state is a number between 0 and 499. You can use the `encode` and `decode` methods of the environment to relate it to the state variables." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "state = env.unwrapped.encode(2, 1, 1, 0) # (taxi row, taxi column, passenger index, destination index)\n", "print(\"State:\", state)\n", "\n", "state = env.unwrapped.decode(328) \n", "print(\"State:\", list(state))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The reward function is simple:\n", "\n", "* r = 20 when delivering the client at the correct location.\n", "* r = -10 when picking up or dropping off a client illegally (picking up where there is no client, dropping off the client somewhere else, etc.).\n", "* r = -1 for all other transitions, in order to encourage the agent to be as fast as possible.\n", "\n", "The actions pickup and dropoff are very dangerous: take them at the wrong time and your return will be very low. The navigation actions are less critical.\n", "\n", "Depending on the initial state, the taxi will need at least 10 steps to deliver the client, so the maximal return you can expect is around 10 (+20 for the success, -1 for each of the steps). \n", "\n", "The task is episodic: if you have not delivered the client within 200 steps, the episode stops (no particular reward)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Random agent\n", "\n", "Let's now define a random agent that just samples the action space." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** Modify the random agent from last time so that it accepts the `GymRecorder` that generates the .gif file.\n", "\n", "```python\n", "def train(self, nb_episodes, recorder=None):\n", "```\n", "\n", "The environment should be started in 'rgb_array_list' mode, not 'ansi'. The game looks different but has the same rules.\n", "\n", "```python\n", "env = gym.make(\"Taxi-v3\", render_mode='rgb_array_list')\n", "recorder = GymRecorder(env)\n", "```\n", "\n", "As episodes in Taxi can be quite long, only the last episode should be recorded:\n", "\n", "```python\n", "if recorder is not None and episode == nb_episodes - 1:\n", "    recorder.record(env.render())\n", "```\n", "\n", "Perform 10 episodes, plot the obtained returns and visualize the last episode." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** What do you think of the returns obtained by the random agent? Conclude on the difficulty of the task." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## On-policy Monte-Carlo control\n", "\n", "Now let's apply on-policy MC control to the Taxi environment. As a reminder, here is the meta-algorithm:\n", "\n", "* **while** True:\n", "\n", "    1. Generate an episode $\\tau = (s_0, a_0, r_1, \\ldots, s_T)$ using the current **stochastic** policy $\\pi$.\n", "\n", "    2. For each state-action pair $(s_t, a_t)$ in the episode, update the estimated Q-value:\n", "\n", "    $$\n", "        Q(s_t, a_t) = Q(s_t, a_t) + \\alpha \\, (R_t - Q(s_t, a_t))\n", "    $$\n", "\n", "    3. For each state $s_t$ in the episode, improve the policy (e.g. $\\epsilon$-greedy):\n", "\n", "    $$\n", "        \\pi(s_t, a) = \\begin{cases}\n", "        1 - \\epsilon \\; \\text{if} \\; a = a^* \\\\\n", "        \\frac{\\epsilon}{|\\mathcal{A}(s_t)|-1} \\; \\text{otherwise.} \\\\\n", "        \\end{cases}\n", "    $$\n", "\n", "In practice, we will need:\n", "\n", "* a **Q-table** storing the estimated Q-value of each state-action pair: its size will be (500, 6).\n", "\n", "* an $\\epsilon$-greedy action selection to select actions in the current state.\n", "\n", "* a learning mechanism allowing us to update the Q-value of all state-action pairs encountered in the episode (see the sketch below).\n", "\n",
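"As a rough sketch (the class layout and the method names `act()`, `update()` and `train()` below are only suggestions, not requirements), these elements could be organized as follows:\n", "\n", "```python\n", "class MonteCarloAgent:\n", "    \"Sketch of an on-policy Monte-Carlo agent with an epsilon-greedy policy.\"\n", "    def __init__(self, env, gamma, epsilon, alpha):\n", "        self.env, self.gamma, self.epsilon, self.alpha = env, gamma, epsilon, alpha\n", "        # Q-table: one row per state, one column per action, initialized to 0\n", "        self.Q = np.zeros((env.observation_space.n, env.action_space.n))\n", "\n", "    def act(self, state):\n", "        # epsilon-greedy action selection (discussed below)\n", "        raise NotImplementedError\n", "\n", "    def update(self, episode):\n", "        # Monte-Carlo update from a complete list of (state, action, reward) transitions\n", "        raise NotImplementedError\n", "\n", "    def train(self, nb_episodes):\n", "        returns, steps = [], []\n", "        for episode_idx in range(nb_episodes):\n", "            state, info = self.env.reset()\n", "            episode = []\n", "            done = False\n", "            while not done:\n", "                action = self.act(state)\n", "                next_state, reward, terminated, truncated, info = self.env.step(action)\n", "                episode.append((state, action, reward))\n", "                state = next_state\n", "                done = terminated or truncated\n", "            self.update(episode)\n", "            returns.append(np.sum([r for _, _, r in episode]))\n", "            steps.append(len(episode))\n", "        return returns, steps\n", "```\n", "\n",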
"**Q:** Create a `MonteCarloAgent` class implementing on-policy MC for the Taxi environment. \n", "\n", "Use $\\gamma = 0.9$, $\\epsilon = 0.1$ and $\\alpha=0.01$ (pass these parameters to the constructor of the agent and store them). Train the agent for 20000 episodes (yes, 20000... Start with one episode to debug everything and then launch the simulation. It should take around one minute). Save the return of each episode in a list, as well as the number of steps of the episode, and plot them in the end. \n", "\n", "The environment should be created without rendering (`env = gym.make(\"Taxi-v3\")`, no recorder).\n", "\n", "Implementing the action selection mechanism should not be a problem: it is the same as for bandits. A little trick (not obligatory): you can implement $\\epsilon$-greedy as:\n", "\n", "```python\n", "action = self.Q[state, :].argmax()\n", "if rng.random() < epsilon:\n", "    action = self.env.action_space.sample()\n", "```\n", "\n", "This is not exactly $\\epsilon$-greedy, as `env.action_space.sample()` may select the greedy action again. In practice it does not matter: it only changes the effective value of $\\epsilon$, but the action selection stays similar. It is better to rely on `env.action_space.sample()` for the exploration, as some Gym problems work better with a normal distribution for the exploration than with a uniform one (e.g. continuous problems). \n", "\n", "Do not select the greedy action with `self.Q[state, :].argmax()` but with `rng.choice(np.where(self.Q[state, :] == self.Q[state, :].max())[0])`: at the beginning of learning, when the Q-values are all 0, you would otherwise always take the first action (south).\n", "\n", "The `update()` method should take a complete episode as argument, as a list of (state, action, reward) transitions. It should be called at the end of an episode only, not after every step.\n", "\n", "A bit tricky is the calculation of the return for each visited state. The naive approach would look like:\n", "\n", "```python\n", "T = len(episode)\n", "for t in range(T):\n", "    state, action, reward = episode[t]\n", "    return_state = 0.0\n", "    for k in range(t, T): # rewards coming after t\n", "        next_state, next_action, next_reward = episode[k]\n", "        return_state += gamma**(k - t) * next_reward\n", "    self.Q[state, action] += alpha * (return_state - self.Q[state, action])\n", "```\n", "\n", "The double for loop can be computationally expensive for long episodes (the complexity is quadratic in the episode length $T$). It is much more efficient to iterate **backwards** over the episode, starting from the last transition and going back to the first one, using the fact that:\n", "\n", "$$R_{t} = r_{t+1} + \\gamma \\, R_{t+1}$$\n", "\n", "The terminal state $s_T$ has a return of 0 by definition. The last transition $s_{T-1} \\rightarrow s_{T}$ therefore has a return of $R_{T-1} = r_T$. The transition before that has a return of $R_{T-2} = r_{T-1} + \\gamma \\, R_{T-1}$, and so on. You can then compute the return of each action taken in the episode (and update its Q-value) in **linear time**.\n", "\n", "To iterate backwards over the list of transitions, use the built-in `reversed()` function:\n", "\n", "```python\n", "l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n", "\n", "for a in reversed(l):\n", "    print(a)\n", "```",
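"\n", "Putting these two ideas together, a minimal sketch of the `update()` method (assuming the attributes `self.Q`, `self.alpha` and `self.gamma` defined in the constructor) could look like:\n", "\n", "```python\n", "def update(self, episode):\n", "    \"Updates the Q-table from a complete episode of (state, action, reward) transitions.\"\n", "    return_state = 0.0 # the return of the terminal state is 0 by definition\n", "    # Iterate backwards over the episode: R_t = r_{t+1} + gamma * R_{t+1}\n", "    for state, action, reward in reversed(episode):\n", "        return_state = reward + self.gamma * return_state\n", "        self.Q[state, action] += self.alpha * (return_state - self.Q[state, action])\n", "```"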
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you may observe, the returns have a huge variance due to the exploration, which makes the plot quite ugly and unreadable. The following function allows you to smooth the returns using a sliding average over $N$ episodes:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def running_average(x, N):\n", "    kernel = np.ones(N) / N\n", "    return np.convolve(x, kernel, mode='same')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** Plot the returns and steps, as well as their sliding average. Comment on the influence of exploration. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** Extend the `MonteCarloAgent` class with a test method that performs a single episode on the environment **without exploration**, optionally records the episode, but does **not** learn. \n", "\n", "```python\n", "class MonteCarloAgentTest(MonteCarloAgent):\n", "    \"\"\"\n", "    On-policy Monte-Carlo agent with a test method.\n", "    \"\"\"\n", "\n", "    def test(self, recorder=None):\n", "        # ...\n", "```\n", "\n", "In the test method, back up the previous value of `epsilon` in a temporary variable and restore it at the end of the episode. Have the method return the undiscounted return of the episode, as well as the number of steps until termination.\n", "\n", "Perform 1000 test episodes without rendering and report the mean return over these 1000 episodes as the final performance of your agent.\n", "\n", "*Tip:* To avoid re-training the agent, simply transfer the Q-table from the previous agent:\n", "\n", "```python\n", "test_agent = MonteCarloAgentTest(env, gamma, epsilon, alpha)\n", "test_agent.Q = agent.Q\n", "\n", "return_episode, nb_steps = test_agent.test()\n", "```",
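"\n", "For reference, a minimal sketch of such a test method (assuming the attributes used previously and an `act()` method that becomes greedy when `epsilon` is 0; the optional recording follows the same pattern as for the random agent) could be:\n", "\n", "```python\n", "def test(self, recorder=None):\n", "    \"Performs a single greedy episode without learning. Returns the undiscounted return and the number of steps.\"\n", "    old_epsilon = self.epsilon # back up the exploration parameter\n", "    self.epsilon = 0.0 # act greedily during the test episode\n", "\n", "    state, info = self.env.reset()\n", "    return_episode, nb_steps, done = 0.0, 0, False\n", "    while not done:\n", "        action = self.act(state)\n", "        state, reward, terminated, truncated, info = self.env.step(action)\n", "        return_episode += reward # undiscounted return\n", "        nb_steps += 1\n", "        done = terminated or truncated\n", "\n", "    if recorder is not None:\n", "        recorder.record(self.env.render())\n", "\n", "    self.epsilon = old_epsilon # restore the exploration parameter\n", "    return return_episode, nb_steps\n", "```"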
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Q:** Visualize one episode after training. The environment used for training had no render mode, but you can always create a new environment and set it in the agent:\n", "\n", "```python\n", "env = gym.make(\"Taxi-v3\", render_mode=\"human\") # or rgb_array_list\n", "test_agent.env = env\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Experiments\n", "\n", "### Early stopping\n", "\n", "**Q:** Train the agent for the smallest number of episodes where the returns seem to have stabilized (e.g. 2000 episodes). Test the agent. Does it work? Why?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Discount rate\n", "\n", "**Q:** Change the value of the discount factor $\\gamma$. As the task is episodic (maximum 200 steps), try a discount rate of 1. What happens? Conclude." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Learning rate\n", "\n", "**Q:** Vary the learning rate `alpha`. What happens?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exploration parameter\n", "\n", "**Q:** Vary the exploration parameter `epsilon` and observe its impact on learning." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exploration scheduling\n", "\n", "Even with a good learning rate (0.01) and a discount factor of 0.9, the exploration parameter has a huge impact on the performance: too low and the agent does not find the optimal policy; too high and the agent is inefficient at the end of learning. \n", "\n", "**Q:** Implement scheduling for epsilon. You can use exponential scheduling as in the bandits exercise:\n", "\n", "$$\\epsilon = \\epsilon \\times (1 - \\epsilon_\\text{decay})$$\n", "\n", "at the end of each episode, with $\\epsilon_\\text{decay}$ being a small decay parameter (`1e-5` or so).\n", "\n", "Find a suitable value for $\\epsilon_\\text{decay}$. Do not hesitate to fine-tune alpha at the same time.\n", "\n", "*Tip:* Prepare and visualize the scheduling in a different cell, and use initial values of $\\epsilon$ and $\\epsilon_\\text{decay}$ that seem to make sense. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3.9.13 ('base')", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" }, "vscode": { "interpreter": { "hash": "3d24234067c217f49dc985cbc60012ce72928059d528f330ba9cb23ce737906d" } } }, "nbformat": 4, "nbformat_minor": 4 }