{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Making Complex Decisions\n", "---\n", "\n", "This Jupyter notebook acts as supporting material for topics covered in **Chapter 17 Making Complex Decisions** of the book* Artificial Intelligence: A Modern Approach*. We make use of the implementations in mdp.py module. This notebook also includes a brief summary of the main topics as a review. Let us import everything from the mdp module to get started." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from mdp import *\n", "from notebook import psource, pseudocode, plot_pomdp_utility" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## CONTENTS\n", "\n", "* Overview\n", "* MDP\n", "* Grid MDP\n", "* Value Iteration\n", " * Value Iteration Visualization\n", "* Policy Iteration\n", "* POMDPs\n", "* POMDP Value Iteration\n", " - Value Iteration Visualization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## OVERVIEW\n", "\n", "Before we start playing with the actual implementations let us review a couple of things about MDPs.\n", "\n", "- A stochastic process has the **Markov property** if the conditional probability distribution of future states of the process (conditional on both past and present states) depends only upon the present state, not on the sequence of events that preceded it.\n", "\n", " -- Source: [Wikipedia](https://en.wikipedia.org/wiki/Markov_property)\n", "\n", "Often it is possible to model many different phenomena as a Markov process by being flexible with our definition of state.\n", " \n", "\n", "- MDPs help us deal with fully-observable and non-deterministic/stochastic environments. For dealing with partially-observable and stochastic cases we make use of generalization of MDPs named POMDPs (partially observable Markov decision process).\n", "\n", "Our overall goal to solve a MDP is to come up with a policy which guides us to select the best action in each state so as to maximize the expected sum of future rewards." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## MDP\n", "\n", "To begin with let us look at the implementation of MDP class defined in mdp.py The docstring tells us what all is required to define a MDP namely - set of states, actions, initial state, transition model, and a reward function. Each of these are implemented as methods. Do not close the popup so that you can follow along the description of code below." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
class MDP:\n",
       "\n",
       "    """A Markov Decision Process, defined by an initial state, transition model,\n",
       "    and reward function. We also keep track of a gamma value, for use by\n",
       "    algorithms. The transition model is represented somewhat differently from\n",
       "    the text. Instead of P(s' | s, a) being a probability number for each\n",
       "    state/state/action triplet, we instead have T(s, a) return a\n",
       "    list of (p, s') pairs. We also keep track of the possible states,\n",
       "    terminal states, and actions for each state. [page 646]"""\n",
       "\n",
       "    def __init__(self, init, actlist, terminals, transitions = {}, reward = None, states=None, gamma=.9):\n",
       "        if not (0 < gamma <= 1):\n",
       "            raise ValueError("An MDP must have 0 < gamma <= 1")\n",
       "\n",
       "        if states:\n",
       "            self.states = states\n",
       "        else:\n",
       "            ## collect states from transitions table\n",
       "            self.states = self.get_states_from_transitions(transitions)\n",
       "            \n",
       "        \n",
       "        self.init = init\n",
       "        \n",
       "        if isinstance(actlist, list):\n",
       "            ## if actlist is a list, all states have the same actions\n",
       "            self.actlist = actlist\n",
       "        elif isinstance(actlist, dict):\n",
       "            ## if actlist is a dict, different actions for each state\n",
       "            self.actlist = actlist\n",
       "        \n",
       "        self.terminals = terminals\n",
       "        self.transitions = transitions\n",
       "        if self.transitions == {}:\n",
       "            print("Warning: Transition table is empty.")\n",
       "        self.gamma = gamma\n",
       "        if reward:\n",
       "            self.reward = reward\n",
       "        else:\n",
       "            self.reward = {s : 0 for s in self.states}\n",
       "        #self.check_consistency()\n",
       "\n",
       "    def R(self, state):\n",
       "        """Return a numeric reward for this state."""\n",
       "        return self.reward[state]\n",
       "\n",
       "    def T(self, state, action):\n",
       "        """Transition model. From a state and an action, return a list\n",
       "        of (probability, result-state) pairs."""\n",
       "        if(self.transitions == {}):\n",
       "            raise ValueError("Transition model is missing")\n",
       "        else:\n",
       "            return self.transitions[state][action]\n",
       "\n",
       "    def actions(self, state):\n",
       "        """Set of actions that can be performed in this state. By default, a\n",
       "        fixed list of actions, except for terminal states. Override this\n",
       "        method if you need to specialize by state."""\n",
       "        if state in self.terminals:\n",
       "            return [None]\n",
       "        else:\n",
       "            return self.actlist\n",
       "\n",
       "    def get_states_from_transitions(self, transitions):\n",
       "        if isinstance(transitions, dict):\n",
       "            s1 = set(transitions.keys())\n",
       "            s2 = set([tr[1] for actions in transitions.values() \n",
       "                              for effects in actions.values() for tr in effects])\n",
       "            return s1.union(s2)\n",
       "        else:\n",
       "            print('Could not retrieve states from transitions')\n",
       "            return None\n",
       "\n",
       "    def check_consistency(self):\n",
       "        # check that all states in transitions are valid\n",
       "        assert set(self.states) == self.get_states_from_transitions(self.transitions)\n",
       "        # check that init is a valid state\n",
       "        assert self.init in self.states\n",
       "        # check reward for each state\n",
       "        #assert set(self.reward.keys()) == set(self.states)\n",
       "        assert set(self.reward.keys()) == set(self.states)\n",
       "        # check that all terminals are valid states\n",
       "        assert all([t in self.states for t in self.terminals])\n",
       "        # check that probability distributions for all actions sum to 1\n",
       "        for s1, actions in self.transitions.items():\n",
       "            for a in actions.keys():\n",
       "                s = 0\n",
       "                for o in actions[a]:\n",
       "                    s += o[0]\n",
       "                assert abs(s - 1) < 0.001\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(MDP)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The **_ _init_ _** method takes in the following parameters:\n", "\n", "- init: the initial state.\n", "- actlist: List of actions possible in each state.\n", "- terminals: List of terminal states where only possible action is exit\n", "- gamma: Discounting factor. This makes sure that delayed rewards have less value compared to immediate ones.\n", "\n", "**R** method returns the reward for each state by using the self.reward dict.\n", "\n", "**T** method is not implemented and is somewhat different from the text. Here we return (probability, s') pairs where s' belongs to list of possible state by taking action a in state s.\n", "\n", "**actions** method returns list of actions possible in each state. By default it returns all actions for states other than terminal states.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let us implement the simple MDP in the image below. States A, B have actions X, Y available in them. Their probabilities are shown just above the arrows. We start with using MDP as base class for our CustomMDP. Obviously we need to make a few changes to suit our case. We make use of a transition matrix as our transitions are not very simple.\n", "" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Transition Matrix as nested dict. State -> Actions in state -> List of (Probability, State) tuples\n", "t = {\n", " \"A\": {\n", " \"X\": [(0.3, \"A\"), (0.7, \"B\")],\n", " \"Y\": [(1.0, \"A\")]\n", " },\n", " \"B\": {\n", " \"X\": {(0.8, \"End\"), (0.2, \"B\")},\n", " \"Y\": {(1.0, \"A\")}\n", " },\n", " \"End\": {}\n", "}\n", "\n", "init = \"A\"\n", "\n", "terminals = [\"End\"]\n", "\n", "rewards = {\n", " \"A\": 5,\n", " \"B\": -10,\n", " \"End\": 100\n", "}" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "class CustomMDP(MDP):\n", " def __init__(self, init, terminals, transition_matrix, reward = None, gamma=.9):\n", " # All possible actions.\n", " actlist = []\n", " for state in transition_matrix.keys():\n", " actlist.extend(transition_matrix[state])\n", " actlist = list(set(actlist))\n", " MDP.__init__(self, init, actlist, terminals, transition_matrix, reward, gamma=gamma)\n", "\n", " def T(self, state, action):\n", " if action is None:\n", " return [(0.0, state)]\n", " else: \n", " return self.t[state][action]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally we instantize the class with the parameters for our MDP in the picture." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": true }, "outputs": [], "source": [ "our_mdp = CustomMDP(init, terminals, t, rewards, gamma=.9)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With this we have successfully represented our MDP. Later we will look at ways to solve this MDP." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## GRID MDP\n", "\n", "Now we look at a concrete implementation that makes use of the MDP as base class. The GridMDP class in the mdp module is used to represent a grid world MDP like the one shown in in **Fig 17.1** of the AIMA Book. We assume for now that the environment is _fully observable_, so that the agent always knows where it is. 
The code should be easy to understand if you have gone through the CustomMDP example." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
class GridMDP(MDP):\n",
       "\n",
       "    """A two-dimensional grid MDP, as in [Figure 17.1]. All you have to do is\n",
       "    specify the grid as a list of lists of rewards; use None for an obstacle\n",
       "    (unreachable state). Also, you should specify the terminal states.\n",
       "    An action is an (x, y) unit vector; e.g. (1, 0) means move east."""\n",
       "\n",
       "    def __init__(self, grid, terminals, init=(0, 0), gamma=.9):\n",
       "        grid.reverse()  # because we want row 0 on bottom, not on top\n",
       "        reward = {}\n",
       "        states = set()\n",
       "        self.rows = len(grid)\n",
       "        self.cols = len(grid[0])\n",
       "        self.grid = grid\n",
       "        for x in range(self.cols):\n",
       "            for y in range(self.rows):\n",
       "                if grid[y][x] is not None:\n",
       "                    states.add((x, y))\n",
       "                    reward[(x, y)] = grid[y][x]\n",
       "        self.states = states\n",
       "        actlist = orientations\n",
       "        transitions = {}\n",
       "        for s in states:\n",
       "            transitions[s] = {}\n",
       "            for a in actlist:\n",
       "                transitions[s][a] = self.calculate_T(s, a)\n",
       "        MDP.__init__(self, init, actlist=actlist,\n",
       "                     terminals=terminals, transitions = transitions, \n",
       "                     reward = reward, states = states, gamma=gamma)\n",
       "\n",
       "    def calculate_T(self, state, action):\n",
       "        if action is None:\n",
       "            return [(0.0, state)]\n",
       "        else:\n",
       "            return [(0.8, self.go(state, action)),\n",
       "                    (0.1, self.go(state, turn_right(action))),\n",
       "                    (0.1, self.go(state, turn_left(action)))]\n",
       "    \n",
       "    def T(self, state, action):\n",
       "        if action is None:\n",
       "            return [(0.0, state)]\n",
       "        else:\n",
       "            return self.transitions[state][action]\n",
       " \n",
       "    def go(self, state, direction):\n",
       "        """Return the state that results from going in this direction."""\n",
       "        state1 = vector_add(state, direction)\n",
       "        return state1 if state1 in self.states else state\n",
       "\n",
       "    def to_grid(self, mapping):\n",
       "        """Convert a mapping from (x, y) to v into a [[..., v, ...]] grid."""\n",
       "        return list(reversed([[mapping.get((x, y), None)\n",
       "                               for x in range(self.cols)]\n",
       "                              for y in range(self.rows)]))\n",
       "\n",
       "    def to_arrows(self, policy):\n",
       "        chars = {\n",
       "            (1, 0): '>', (0, 1): '^', (-1, 0): '<', (0, -1): 'v', None: '.'}\n",
       "        return self.to_grid({s: chars[a] for (s, a) in policy.items()})\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(GridMDP)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The **_ _init_ _** method takes **grid** as an extra parameter compared to the MDP class. The grid is a nested list of rewards in states.\n", "\n", "**go** method returns the state by going in particular direction by using vector_add.\n", "\n", "**T** method is not implemented and is somewhat different from the text. Here we return (probability, s') pairs where s' belongs to list of possible state by taking action a in state s.\n", "\n", "**actions** method returns list of actions possible in each state. By default it returns all actions for states other than terminal states.\n", "\n", "**to_arrows** are used for representing the policy in a grid like format." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can create a GridMDP like the one in **Fig 17.1** as follows: \n", "\n", " GridMDP([[-0.04, -0.04, -0.04, +1],\n", " [-0.04, None, -0.04, -1],\n", " [-0.04, -0.04, -0.04, -0.04]],\n", " terminals=[(3, 2), (3, 1)])\n", " \n", "In fact the **sequential_decision_environment** in mdp module has been instantized using the exact same code." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sequential_decision_environment" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# VALUE ITERATION\n", "\n", "Now that we have looked how to represent MDPs. Let's aim at solving them. Our ultimate goal is to obtain an optimal policy. We start with looking at Value Iteration and a visualisation that should help us understanding it better.\n", "\n", "We start by calculating Value/Utility for each of the states. The Value of each state is the expected sum of discounted future rewards given we start in that state and follow a particular policy $\\pi$. The value or the utility of a state is given by\n", "\n", "$$U(s)=R(s)+\\gamma\\max_{a\\epsilon A(s)}\\sum_{s'} P(s'\\ |\\ s,a)U(s')$$\n", "\n", "This is called the Bellman equation. The algorithm Value Iteration (**Fig. 17.4** in the book) relies on finding solutions of this Equation. The intuition Value Iteration works is because values propagate through the state space by means of local updates. This point will we more clear after we encounter the visualisation. For more information you can refer to **Section 17.2** of the book. \n" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def value_iteration(mdp, epsilon=0.001):\n",
       "    """Solving an MDP by value iteration. [Figure 17.4]"""\n",
       "    U1 = {s: 0 for s in mdp.states}\n",
       "    R, T, gamma = mdp.R, mdp.T, mdp.gamma\n",
       "    while True:\n",
       "        U = U1.copy()\n",
       "        delta = 0\n",
       "        for s in mdp.states:\n",
       "            U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)])\n",
       "                                        for a in mdp.actions(s)])\n",
       "            delta = max(delta, abs(U1[s] - U[s]))\n",
       "        if delta < epsilon * (1 - gamma) / gamma:\n",
       "            return U\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(value_iteration)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It takes as inputs two parameters, an MDP to solve and epsilon, the maximum error allowed in the utility of any state. It returns a dictionary containing utilities where the keys are the states and values represent utilities.
Value Iteration starts with arbitrary initial values for the utilities, calculates the right side of the Bellman equation and plugs it into the left hand side, thereby updating the utility of each state from the utilities of its neighbors. \n", "This is repeated until equilibrium is reached. \n", "It works on the principle of _Dynamic Programming_ - using precomputed information to simplify the subsequent computation. \n", "If $U_i(s)$ is the utility value for state $s$ at the $i$ th iteration, the iteration step, called Bellman update, looks like this:\n", "\n", "$$ U_{i+1}(s) \\leftarrow R(s) + \\gamma \\max_{a \\epsilon A(s)} \\sum_{s'} P(s'\\ |\\ s,a)U_{i}(s') $$\n", "\n", "As you might have noticed, `value_iteration` has an infinite loop. How do we decide when to stop iterating? \n", "The concept of _contraction_ successfully explains the convergence of value iteration. \n", "Refer to **Section 17.2.3** of the book for a detailed explanation. \n", "In the algorithm, we calculate a value $delta$ that measures the difference in the utilities of the current time step and the previous time step. \n", "\n", "$$\\delta = \\max{(\\delta, \\begin{vmatrix}U_{i + 1}(s) - U_i(s)\\end{vmatrix})}$$\n", "\n", "This value of delta decreases as the values of $U_i$ converge.\n", "We terminate the algorithm if the $\\delta$ value is less than a threshold value determined by the hyperparameter _epsilon_.\n", "\n", "$$\\delta \\lt \\epsilon \\frac{(1 - \\gamma)}{\\gamma}$$\n", "\n", "To summarize, the Bellman update is a _contraction_ by a factor of $gamma$ on the space of utility vectors. \n", "Hence, from the properties of contractions in general, it follows that `value_iteration` always converges to a unique solution of the Bellman equations whenever $gamma$ is less than 1.\n", "We then terminate the algorithm when a reasonable approximation is achieved.\n", "In practice, it often occurs that the policy $pi$ becomes optimal long before the utility function converges. For the given 4 x 3 environment with $gamma = 0.9$, the policy $pi$ is optimal when $i = 4$ (at the 4th iteration), even though the maximum error in the utility function is stil 0.46. This can be clarified from **figure 17.6** in the book. Hence, to increase computational efficiency, we often use another method to solve MDPs called Policy Iteration which we will see in the later part of this notebook. \n", "
For now, let us solve the **sequential_decision_environment** GridMDP using `value_iteration`." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{(0, 0): 0.2962883154554812,\n", " (0, 1): 0.3984432178350045,\n", " (0, 2): 0.5093943765842497,\n", " (1, 0): 0.25386699846479516,\n", " (1, 2): 0.649585681261095,\n", " (2, 0): 0.3447542300124158,\n", " (2, 1): 0.48644001739269643,\n", " (2, 2): 0.7953620878466678,\n", " (3, 0): 0.12987274656746342,\n", " (3, 1): -1.0,\n", " (3, 2): 1.0}" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "value_iteration(sequential_decision_environment)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The pseudocode for the algorithm:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "### AIMA3e\n", "__function__ VALUE-ITERATION(_mdp_, _ε_) __returns__ a utility function \n", " __inputs__: _mdp_, an MDP with states _S_, actions _A_(_s_), transition model _P_(_s′_ | _s_, _a_), \n", "      rewards _R_(_s_), discount _γ_ \n", "   _ε_, the maximum error allowed in the utility of any state \n", " __local variables__: _U_, _U′_, vectors of utilities for states in _S_, initially zero \n", "        _δ_, the maximum change in the utility of any state in an iteration \n", "\n", " __repeat__ \n", "   _U_ ← _U′_; _δ_ ← 0 \n", "   __for each__ state _s_ in _S_ __do__ \n", "     _U′_\\[_s_\\] ← _R_(_s_) + _γ_ max_a_ ∈ _A_(_s_) Σ _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] \n", "     __if__ | _U′_\\[_s_\\] − _U_\\[_s_\\] | > _δ_ __then__ _δ_ ← | _U′_\\[_s_\\] − _U_\\[_s_\\] | \n", " __until__ _δ_ < _ε_(1 − _γ_)/_γ_ \n", " __return__ _U_ \n", "\n", "---\n", "__Figure ??__ The value iteration algorithm for calculating utilities of states. The termination condition is from Equation (__??__)." ], "text/plain": [ "" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pseudocode(\"Value-Iteration\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### AIMA3e\n", "__function__ VALUE-ITERATION(_mdp_, _ε_) __returns__ a utility function \n", " __inputs__: _mdp_, an MDP with states _S_, actions _A_(_s_), transition model _P_(_s′_ | _s_, _a_), \n", "      rewards _R_(_s_), discount _γ_ \n", "   _ε_, the maximum error allowed in the utility of any state \n", " __local variables__: _U_, _U′_, vectors of utilities for states in _S_, initially zero \n", "        _δ_, the maximum change in the utility of any state in an iteration \n", "\n", " __repeat__ \n", "   _U_ ← _U′_; _δ_ ← 0 \n", "   __for each__ state _s_ in _S_ __do__ \n", "     _U′_\\[_s_\\] ← _R_(_s_) + _γ_ max_a_ ∈ _A_(_s_) Σ _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] \n", "     __if__ | _U′_\\[_s_\\] − _U_\\[_s_\\] | > _δ_ __then__ _δ_ ← | _U′_\\[_s_\\] − _U_\\[_s_\\] | \n", " __until__ _δ_ < _ε_(1 − _γ_)/_γ_ \n", " __return__ _U_ \n", "\n", "---\n", "__Figure ??__ The value iteration algorithm for calculating utilities of states. The termination condition is from Equation (__??__)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## VALUE ITERATION VISUALIZATION\n", "\n", "To illustrate that values propagate out of states let us create a simple visualisation. We will be using a modified version of the value_iteration function which will store U over time. We will also remove the parameter epsilon and instead add the number of iterations we want." 
] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def value_iteration_instru(mdp, iterations=20):\n", " U_over_time = []\n", " U1 = {s: 0 for s in mdp.states}\n", " R, T, gamma = mdp.R, mdp.T, mdp.gamma\n", " for _ in range(iterations):\n", " U = U1.copy()\n", " for s in mdp.states:\n", " U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)])\n", " for a in mdp.actions(s)])\n", " U_over_time.append(U)\n", " return U_over_time" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we define a function to create the visualisation from the utilities returned by **value_iteration_instru**. The reader need not concern himself with the code that immediately follows as it is the usage of Matplotib with IPython Widgets. If you are interested in reading more about these visit [ipywidgets.readthedocs.io](http://ipywidgets.readthedocs.io)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": true }, "outputs": [], "source": [ "columns = 4\n", "rows = 3\n", "U_over_time = value_iteration_instru(sequential_decision_environment)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": true }, "outputs": [], "source": [ "%matplotlib inline\n", "from notebook import make_plot_grid_step_function\n", "\n", "plot_grid_step = make_plot_grid_step_function(columns, rows, U_over_time)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "scrolled": true }, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAATcAAADuCAYAAABcZEBhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAADYxJREFUeJzt211oW2eex/Hf2Xpb0onWrVkm1otL\nW2SmrNaVtzS2K8jCFhJPXsbtRWcTX4zbmUBINkMYw5jmYrYwhNJuMWTjaTCYDSW5cQK9iEOcpDad\nLAREVtBEF+OwoDEyWEdxirvjelw36cScubCi1PWLvK0lnfnP9wMGHz2P4dEf8fWRnDie5wkArPmb\nah8AAMqBuAEwibgBMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMKnm/7N5bk78dwagjDYHnGofwf88\nb11D4s4NgEnEDYBJxA2AScQNgEnEDYBJxA2AScQNgEnEDYBJxA2AScQNgEnEDYBJxA2AScQNgEnE\nDYBJxA2AScQNgEnEDYBJxA2AScQNgEnEDYBJxA2AScQNgEnEDYBJxA2AScQNgEnEDYBJxA2AScQN\ngEnEDYBJxA2AScQNgEm+jZvneerpOaJ4PKq2tueVTt9Ycd/Nm5+otbVJ8XhUPT1H5HnekvUTJ3oV\nCDianp6uxLErhvmUxoxW9zNJ35f0j6use5KOSIpKel7S1yd3WlJj4et0Gc/4Xfk2biMjlzU+nlE6\nnVFf34C6uw+tuK+7+5D6+gaUTmc0Pp7R6OiV4louN6mrV0fV0PBUpY5dMcynNGa0ujckXVlj/bKk\nTOFrQNKDyf2fpF9L+h9JqcL3fyjbKb8b38ZteHhInZ1dchxHLS1tmpmZ0dTU7SV7pqZua3Z2Vq2t\nL8lxHHV2dunixfPF9aNHu3Xs2HtyHKfSxy875lMaM1rdP0uqW2N9SFKXJEdSm6QZSbclfSRpe+Fn\nnyx8v1Ykq8m3ccvnXYXDDcXrcDiifN5dYU+keB0KPdwzPHxBoVBYTU3xyhy4wphPaczo23MlNXzt\nOlJ4bLXH/aim2gdYzTc/95C07Lfnanvm5+fV2/u2zp8fKdv5qo35lMaMvr3lU1m8i1vtcT/y1Z3b\nwMBJJRLNSiSaFQyG5LqTxTXXzSkYDC3ZHw5H5Lq54nU+v7gnmx3XxERWiURcsdjTct2ctm17QXfu\nTFXsuZQD8ymNGW2MiKTJr13nJIXWeNyPfBW3AwcOK5lMK5lMa8+eVzU4eEae5ymVuq7a2lrV1weX\n7K+vDyoQCCiVui7P8zQ4eEa7d7+iWKxJ2eynGhub0NjYhMLhiK5du6EtW+qr9Mw2BvMpjRltjA5J\nZ7R4p3ZdUq2koKR2SSNa/CPCHwrft1fpjKX49m1pe/sujYxcUjwe1aZNj6u//4PiWiLRrGQyLUk6\nfrxfBw++obt3v9T27Tu1Y8fOah25ophPacxodZ2S/lvStBbvxn4t6U+FtYOSdkm6pMV/CvK4pAeT\nq5P075K2Fq7f0tp/mKgmZ6XPHFYzN7fiW24AG2RzwK+fYPmI561rSL56WwoAG4W4ATCJuAEwibgB\nMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMIm4ATCJuAEw\nibgBMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMIm4ATCJuAEwibgBMKmm2gew\nZPP3vGofwffmvnCqfQRfc8RrqJT1Tog7NwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3\nACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcA\nJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAm+TZunuepp+eI4vGo
2tqeVzp9Y8V9N29+\notbWJsXjUfX0HJHneUvWT5zoVSDgaHp6uhLHrpgrV67oB889p2hjo959991l6/fu3dPeffsUbWxU\na1ubJiYmimvvvPOOoo2N+sFzz+mjjz6q4Kkri9dQKf8r6SVJj0nqXWNfVlKrpEZJeyV9VXj8XuE6\nWlifKNdBvxXfxm1k5LLGxzNKpzPq6xtQd/ehFfd1dx9SX9+A0umMxsczGh29UlzL5SZ19eqoGhqe\nqtSxK2JhYUGHf/5zXb50SbfGxjR49qxu3bq1ZM+pU6f05BNP6PeZjLp/8Qu9efSoJOnWrVs6e+6c\nxn73O125fFn/dviwFhYWqvE0yo7XUCl1kvok/bLEvjcldUvKSHpS0qnC46cK178vrL9ZnmN+S76N\n2/DwkDo7u+Q4jlpa2jQzM6OpqdtL9kxN3dbs7KxaW1+S4zjq7OzSxYvni+tHj3br2LH35DhOpY9f\nVqlUStFoVM8++6weffRR7du7V0NDQ0v2DF24oNdff12S9Nprr+njjz+W53kaGhrSvr179dhjj+mZ\nZ55RNBpVKpWqxtMoO15DpXxf0lZJf7vGHk/SbyW9Vrh+XdKD+QwVrlVY/7iw3x98G7d83lU43FC8\nDocjyufdFfZEiteh0MM9w8MXFAqF1dQUr8yBK8h1XTVEHj7vSCQi13WX72lYnF9NTY1qa2v12Wef\nLXlckiLh8LKftYLX0Eb4TNITkmoK1xFJD2boSnow3xpJtYX9/lBTekt1fPNzD0nLfnuutmd+fl69\nvW/r/PmRsp2vmr7LbNbzs1bwGtoIK92JOetYqz5f3bkNDJxUItGsRKJZwWBIrjtZXHPdnILB0JL9\n4XBErpsrXufzi3uy2XFNTGSVSMQViz0t181p27YXdOfOVMWeSzlFIhFN5h4+71wup1AotHzP5OL8\n7t+/r88//1x1dXVLHpeknOsu+9m/ZLyGSjkpqbnwlV/H/r+XNCPpfuE6J+nBDCOSHsz3vqTPtfg5\nnj/4Km4HDhxWMplWMpnWnj2vanDwjDzPUyp1XbW1taqvDy7ZX18fVCAQUCp1XZ7naXDwjHbvfkWx\nWJOy2U81NjahsbEJhcMRXbt2Q1u21FfpmW2srVu3KpPJKJvN6quvvtLZc+fU0dGxZE/Hj36k06dP\nS5I+/PBDvfzyy3IcRx0dHTp77pzu3bunbDarTCajlpaWajyNsuA1VMphSenC13p+qTmS/kXSh4Xr\n05JeKXzfUbhWYf1l+enOzbdvS9vbd2lk5JLi8ag2bXpc/f0fFNcSiWYlk2lJ0vHj/Tp48A3dvful\ntm/fqR07dlbryBVTU1Oj93/zG7X/8IdaWFjQz376U8ViMb311lt68cUX1dHRof379+snXV2KNjaq\nrq5OZwcHJUmxWEz/+uMf6x9iMdXU1Ojk++/rkUceqfIzKg9eQ6VMSXpR0qwW73P+U9ItSX8naZek\n/9JiAP9D0j5Jv5L0T5L2F35+v6SfaPGfgtRJOlvBs5fmrPSZw2rm5nz0pxAf2vw9xlPK3Bf++c3u\nR4FAtU/gf563vttDX70tBYCNQtwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwA\nmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYRNwAmETcAJhE3ACY\nRNwAmETcAJhE3ACYRNwAmETcAJhE3ACYVFPtA1gy94VT7SPgL9wf/1jtE9jBnRsAk4gbAJOIGwCT\niBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOI\nGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gb\nAJN8GzfP89TTc0TxeFRtbc8rnb6x4r6bNz9Ra2uT4vGoenqOyPO8JesnTvQqEHA0PT1diWNXDPMp\njRmtzfp8fBu3kZHLGh/PKJ3OqK9vQN3dh1bc1919SH19A0qnMxofz2h09EpxLZeb1NWro2poeKpS\nx64Y5lMaM1qb9fn4Nm7Dw0Pq7OyS4zhqaWnTzMyMpqZuL9kzNXVbs7Ozam19SY7jqLOzSxcvni+u\nHz3arWPH3pPjOJU+ftkxn9KY0dqsz8e3ccvnXYXDDcXrcDiifN5dYU+keB0KPdwzPHxBoVBYTU3x\nyhy4wphPacxobdbnU1PtA6zmm+/rJS377bDanvn5efX2vq3z50fKdr5qYz6lMaO1WZ+Pr+7cBgZO\nKpFoViLRrGAwJNedLK65bk7BYGjJ/nA4ItfNFa/z+cU92ey4JiaySiTiisWeluvmtG3bC7pzZ6pi\nz6UcmE9pzGhtf03z8VXcDhw4rGQyrWQyrT17XtXg4Bl5nqdU6rpqa2tVXx9csr++PqhAIKBU6ro8\nz9Pg4Bnt3v2KYrEmZbOfamxsQmNjEwqHI7p27Ya2bKmv0jPbGMynNGa0tr+m+fj2bWl7+y6NjFxS\nPB7Vpk2Pq7//g+JaItGsZDItSTp+vF8HD76hu3e/1PbtO7Vjx85qHbmimE9pzGht1ufjrPSeejVz\nc1r/ZgAog82bta4/zfrqbSkAbBTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTi\nBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIG\nwCTiBsAk4gbAJOIGwCTiBsAk4gbAJOIGwCTH87xqnwEANhx3bgBMIm4ATCJuAEwibgBMIm4ATCJu\nAEwibgBMIm4ATCJuAEwibgBM+jPdN0cNjYpeKAAAAABJRU5ErkJggg==\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "The installed widget Javascript is the wrong version. 
It must satisfy the semver range ~2.1.4.\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "77e9849e074841e49d8b0ebc8191507c" } }, "metadata": {}, "output_type": "display_data" } ], "source": [ "import ipywidgets as widgets\n", "from IPython.display import display\n", "from notebook import make_visualize\n", "\n", "iteration_slider = widgets.IntSlider(min=1, max=15, step=1, value=0)\n", "w=widgets.interactive(plot_grid_step,iteration=iteration_slider)\n", "display(w)\n", "\n", "visualize_callback = make_visualize(iteration_slider)\n", "\n", "visualize_button = widgets.ToggleButton(description = \"Visualize\", value = False)\n", "time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])\n", "a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)\n", "display(a)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Move the slider above to observe how the utility changes across iterations. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The **Visualize Button** will automatically animate the slider for you. The **Extra Delay Box** allows you to set time delay in seconds upto one second for each time step. There is also an interactive editor for grid-world problems `grid_mdp.py` in the gui folder for you to play around with." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# POLICY ITERATION\n", "\n", "We have already seen that value iteration converges to the optimal policy long before it accurately estimates the utility function. \n", "If one action is clearly better than all the others, then the exact magnitude of the utilities in the states involved need not be precise. \n", "The policy iteration algorithm works on this insight. \n", "The algorithm executes two fundamental steps:\n", "* **Policy evaluation**: Given a policy _πᵢ_, calculate _Uᵢ = U(πᵢ)_, the utility of each state if _πᵢ_ were to be executed.\n", "* **Policy improvement**: Calculate a new policy _πᵢ₊₁_ using one-step look-ahead based on the utility values calculated.\n", "\n", "The algorithm terminates when the policy improvement step yields no change in the utilities. \n", "Refer to **Figure 17.6** in the book to see how this is an improvement over value iteration.\n", "We now have a simplified version of the Bellman equation\n", "\n", "$$U_i(s) = R(s) + \\gamma \\sum_{s'}P(s'\\ |\\ s, \\pi_i(s))U_i(s')$$\n", "\n", "An important observation in this equation is that this equation doesn't have the `max` operator, which makes it linear.\n", "For _n_ states, we have _n_ linear equations with _n_ unknowns, which can be solved exactly in time _**O(n³)**_.\n", "For more implementational details, have a look at **Section 17.3**.\n", "Let us now look at how the expected utility is found and how `policy_iteration` is implemented." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def expected_utility(a, s, U, mdp):\n",
       "    """The expected utility of doing a in state s, according to the MDP and U."""\n",
       "    return sum([p * U[s1] for (p, s1) in mdp.T(s, a)])\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(expected_utility)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def policy_iteration(mdp):\n",
       "    """Solve an MDP by policy iteration [Figure 17.7]"""\n",
       "    U = {s: 0 for s in mdp.states}\n",
       "    pi = {s: random.choice(mdp.actions(s)) for s in mdp.states}\n",
       "    while True:\n",
       "        U = policy_evaluation(pi, U, mdp)\n",
       "        unchanged = True\n",
       "        for s in mdp.states:\n",
       "            a = argmax(mdp.actions(s), key=lambda a: expected_utility(a, s, U, mdp))\n",
       "            if a != pi[s]:\n",
       "                pi[s] = a\n",
       "                unchanged = False\n",
       "        if unchanged:\n",
       "            return pi\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(policy_iteration)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
Fortunately, it is not necessary to do _exact_ policy evaluation. \n", "The utilities can instead be reasonably approximated by performing some number of simplified value iteration steps.\n", "The simplified Bellman update equation for the process is\n", "\n", "$$U_{i+1}(s) \\leftarrow R(s) + \\gamma\\sum_{s'}P(s'\\ |\\ s,\\pi_i(s))U_{i}(s')$$\n", "\n", "and this is repeated _k_ times to produce the next utility estimate. This is called _modified policy iteration_." ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def policy_evaluation(pi, U, mdp, k=20):\n",
       "    """Return an updated utility mapping U from each state in the MDP to its\n",
       "    utility, using an approximation (modified policy iteration)."""\n",
       "    R, T, gamma = mdp.R, mdp.T, mdp.gamma\n",
       "    for i in range(k):\n",
       "        for s in mdp.states:\n",
       "            U[s] = R(s) + gamma * sum([p * U[s1] for (p, s1) in T(s, pi[s])])\n",
       "    return U\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(policy_evaluation)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us now solve **`sequential_decision_environment`** using `policy_iteration`." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{(0, 0): (0, 1),\n", " (0, 1): (0, 1),\n", " (0, 2): (1, 0),\n", " (1, 0): (1, 0),\n", " (1, 2): (1, 0),\n", " (2, 0): (0, 1),\n", " (2, 1): (0, 1),\n", " (2, 2): (1, 0),\n", " (3, 0): (-1, 0),\n", " (3, 1): None,\n", " (3, 2): None}" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "policy_iteration(sequential_decision_environment)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "### AIMA3e\n", "__function__ POLICY-ITERATION(_mdp_) __returns__ a policy \n", " __inputs__: _mdp_, an MDP with states _S_, actions _A_(_s_), transition model _P_(_s′_ | _s_, _a_) \n", " __local variables__: _U_, a vector of utilities for states in _S_, initially zero \n", "        _π_, a policy vector indexed by state, initially random \n", "\n", " __repeat__ \n", "   _U_ ← POLICY\\-EVALUATION(_π_, _U_, _mdp_) \n", "   _unchanged?_ ← true \n", "   __for each__ state _s_ __in__ _S_ __do__ \n", "     __if__ max_a_ ∈ _A_(_s_) Σ_s′_ _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] > Σ_s′_ _P_(_s′_ | _s_, _π_\\[_s_\\]) _U_\\[_s′_\\] __then do__ \n", "       _π_\\[_s_\\] ← argmax_a_ ∈ _A_(_s_) Σ_s′_ _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] \n", "       _unchanged?_ ← false \n", " __until__ _unchanged?_ \n", " __return__ _π_ \n", "\n", "---\n", "__Figure ??__ The policy iteration algorithm for calculating an optimal policy." ], "text/plain": [ "" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pseudocode('Policy-Iteration')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### AIMA3e\n", "__function__ POLICY-ITERATION(_mdp_) __returns__ a policy \n", " __inputs__: _mdp_, an MDP with states _S_, actions _A_(_s_), transition model _P_(_s′_ | _s_, _a_) \n", " __local variables__: _U_, a vector of utilities for states in _S_, initially zero \n", "        _π_, a policy vector indexed by state, initially random \n", "\n", " __repeat__ \n", "   _U_ ← POLICY\\-EVALUATION(_π_, _U_, _mdp_) \n", "   _unchanged?_ ← true \n", "   __for each__ state _s_ __in__ _S_ __do__ \n", "     __if__ max_a_ ∈ _A_(_s_) Σ_s′_ _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] > Σ_s′_ _P_(_s′_ | _s_, _π_\\[_s_\\]) _U_\\[_s′_\\] __then do__ \n", "       _π_\\[_s_\\] ← argmax_a_ ∈ _A_(_s_) Σ_s′_ _P_(_s′_ | _s_, _a_) _U_\\[_s′_\\] \n", "       _unchanged?_ ← false \n", " __until__ _unchanged?_ \n", " __return__ _π_ \n", "\n", "---\n", "__Figure ??__ The policy iteration algorithm for calculating an optimal policy." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## Sequential Decision Problems\n", "\n", "Now that we have the tools required to solve MDPs, let us see how Sequential Decision Problems can be solved step by step and how a few built-in tools in the GridMDP class help us better analyse the problem at hand. \n", "As always, we will work with the grid world from **Figure 17.1** from the book.\n", "![title](images/grid_mdp.jpg)\n", "
This is the environment for our agent.\n", "We assume for now that the environment is _fully observable_, so that the agent always knows where it is.\n", "We also assume that the transitions are **Markovian**, that is, the probability of reaching state $s'$ from state $s$ depends only on $s$ and not on the history of earlier states.\n", "Almost all stochastic decision problems can be reframed as a Markov Decision Process just by tweaking the definition of a _state_ for that particular problem.\n", "
\n", "However, the actions of our agent in this environment are unreliable. In other words, the motion of our agent is stochastic. \n", "

\n", "More specifically, the agent may - \n", "* move correctly in the intended direction with a probability of _0.8_, \n", "* move $90^\\circ$ to the right of the intended direction with a probability 0.1\n", "* move $90^\\circ$ to the left of the intended direction with a probability 0.1\n", "

\n", "The agent stays put if it bumps into a wall.\n", "![title](images/grid_mdp_agent.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These properties of the agent are called the transition properties and are hardcoded into the GridMDP class as you can see below." ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
    def T(self, state, action):\n",
       "        if action is None:\n",
       "            return [(0.0, state)]\n",
       "        else:\n",
       "            return self.transitions[state][action]\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(GridMDP.T)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To completely define our task environment, we need to specify the utility function for the agent. \n", "This is the function that gives the agent a rough estimate of how good being in a particular state is, or how much _reward_ an agent receives by being in that state.\n", "The agent then tries to maximize the reward it gets.\n", "As the decision problem is sequential, the utility function will depend on a sequence of states rather than on a single state.\n", "For now, we simply stipulate that in each state $s$, the agent receives a finite reward $R(s)$.\n", "\n", "For any given state, the actions the agent can take are encoded as given below:\n", "- Move Up: (0, 1)\n", "- Move Down: (0, -1)\n", "- Move Left: (-1, 0)\n", "- Move Right: (1, 0)\n", "- Do nothing: `None`\n", "\n", "We now wonder what a valid solution to the problem might look like. \n", "We cannot have fixed action sequences as the environment is stochastic and we can eventually end up in an undesirable state.\n", "Therefore, a solution must specify what the agent shoulddo for _any_ state the agent might reach.\n", "
\n", "Such a solution is known as a **policy** and is usually denoted by $\\pi$.\n", "
\n", "The **optimal policy** is the policy that yields the highest expected utility an is usually denoted by $\\pi^*$.\n", "
\n", "The `GridMDP` class has a useful method `to_arrows` that outputs a grid showing the direction the agent should move, given a policy.\n", "We will use this later to better understand the properties of the environment." ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
    def to_arrows(self, policy):\n",
       "        chars = {\n",
       "            (1, 0): '>', (0, 1): '^', (-1, 0): '<', (0, -1): 'v', None: '.'}\n",
       "        return self.to_grid({s: chars[a] for (s, a) in policy.items()})\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(GridMDP.to_arrows)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This method directly encodes the actions that the agent can take (described above) to characters representing arrows and shows it in a grid format for human visalization purposes. \n", "It converts the received policy from a `dictionary` to a grid using the `to_grid` method." ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
    def to_grid(self, mapping):\n",
       "        """Convert a mapping from (x, y) to v into a [[..., v, ...]] grid."""\n",
       "        return list(reversed([[mapping.get((x, y), None)\n",
       "                               for x in range(self.cols)]\n",
       "                              for y in range(self.rows)]))\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(GridMDP.to_grid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have all the tools required and a good understanding of the agent and the environment, we consider some cases and see how the agent should behave for each case." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Case 1\n", "---\n", "R(s) = -0.04 in all states except terminal states" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Note that this environment is also initialized in mdp.py by default\n", "sequential_decision_environment = GridMDP([[-0.04, -0.04, -0.04, +1],\n", " [-0.04, None, -0.04, -1],\n", " [-0.04, -0.04, -0.04, -0.04]],\n", " terminals=[(3, 2), (3, 1)])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will use the `best_policy` function to find the best policy for this environment.\n", "But, as you can see, `best_policy` requires a utility function as well.\n", "We already know that the utility function can be found by `value_iteration`.\n", "Hence, our best policy is:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": true }, "outputs": [], "source": [ "pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .001))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now use the `to_arrows` method to see how our agent should pick its actions in the environment." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "> > > .\n", "^ None ^ .\n", "^ > ^ <\n" ] } ], "source": [ "from utils import print_table\n", "print_table(sequential_decision_environment.to_arrows(pi))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is exactly the output we expected\n", "
\n", "![title](images/-0.04.jpg)\n", "
\n", "Notice that, because the cost of taking a step is fairly small compared with the penalty for ending up in `(4, 2)` by accident, the optimal policy is conservative. \n", "In state `(3, 1)` it recommends taking the long way round, rather than taking the shorter way and risking getting a large negative reward of -1 in `(4, 2)`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Case 2\n", "---\n", "R(s) = -0.4 in all states except in terminal states" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": true }, "outputs": [], "source": [ "sequential_decision_environment = GridMDP([[-0.4, -0.4, -0.4, +1],\n", " [-0.4, None, -0.4, -1],\n", " [-0.4, -0.4, -0.4, -0.4]],\n", " terminals=[(3, 2), (3, 1)])" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "> > > .\n", "^ None ^ .\n", "^ > ^ <\n" ] } ], "source": [ "pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .001))\n", "from utils import print_table\n", "print_table(sequential_decision_environment.to_arrows(pi))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is exactly the output we expected\n", "![title](images/-0.4.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As the reward for each state is now more negative, life is certainly more unpleasant.\n", "The agent takes the shortest route to the +1 state and is willing to risk falling into the -1 state by accident." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Case 3\n", "---\n", "R(s) = -4 in all states except terminal states" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": true }, "outputs": [], "source": [ "sequential_decision_environment = GridMDP([[-4, -4, -4, +1],\n", " [-4, None, -4, -1],\n", " [-4, -4, -4, -4]],\n", " terminals=[(3, 2), (3, 1)])" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "> > > .\n", "^ None > .\n", "> > > ^\n" ] } ], "source": [ "pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .001))\n", "from utils import print_table\n", "print_table(sequential_decision_environment.to_arrows(pi))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is exactly the output we expected\n", "![title](images/-4.jpg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The living reward for each state is now lower than the least rewarding terminal. Life is so _painful_ that the agent heads for the nearest exit as even the worst exit is less painful than any living state." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Case 4\n", "---\n", "R(s) = 4 in all states except terminal states" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "collapsed": true }, "outputs": [], "source": [ "sequential_decision_environment = GridMDP([[4, 4, 4, +1],\n", " [4, None, 4, -1],\n", " [4, 4, 4, 4]],\n", " terminals=[(3, 2), (3, 1)])" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "> > < .\n", "> None < .\n", "> > > v\n" ] } ], "source": [ "pi = best_policy(sequential_decision_environment, value_iteration(sequential_decision_environment, .001))\n", "from utils import print_table\n", "print_table(sequential_decision_environment.to_arrows(pi))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this case, the output we expect is\n", "![title](images/4.jpg)\n", "
\n", "As life is positively enjoyable and the agent avoids _both_ exits.\n", "Even though the output we get is not exactly what we want, it is definitely not wrong.\n", "The scenario here requires the agent to anything but reach a terminal state, as this is the only way the agent can maximize its reward (total reward tends to infinity), and the program does just that.\n", "
\n", "Currently, the GridMDP class doesn't support an explicit marker for a \"do whatever you like\" action or a \"don't care\" condition.\n", "You can however, extend the class to do so.\n", "
\n", "For in-depth knowledge about sequential decision problems, refer **Section 17.1** in the AIMA book." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## POMDP\n", "---\n", "Partially Observable Markov Decision Problems\n", "\n", "In retrospect, a Markov decision process or MDP is defined as:\n", "- a sequential decision problem for a fully observable, stochastic environment with a Markovian transition model and additive rewards.\n", "\n", "An MDP consists of a set of states (with an initial state $s_0$); a set $A(s)$ of actions\n", "in each state; a transition model $P(s' | s, a)$; and a reward function $R(s)$.\n", "\n", "The MDP seeks to make sequential decisions to occupy states so as to maximise some combination of the reward function $R(s)$.\n", "\n", "The characteristic problem of the MDP is hence to identify the optimal policy function $\\pi^*(s)$ that provides the _utility-maximising_ action $a$ to be taken when the current state is $s$.\n", "\n", "### Belief vector\n", "\n", "**Note**: The book refers to the _belief vector_ as the _belief state_. We use the latter terminology here to retain our ability to refer to the belief vector as a _probability distribution over states_.\n", "\n", "The solution of an MDP is subject to certain properties of the problem which are assumed and justified in [Section 17.1]. One critical assumption is that the agent is **fully aware of its current state at all times**.\n", "\n", "A tedious (but rewarding, as we will see) way of expressing this is in terms of the **belief vector** $b$ of the agent. The belief vector is a function mapping states to probabilities or certainties of being in those states.\n", "\n", "Consider an agent that is fully aware that it is in state $s_i$ in the statespace $(s_1, s_2, ... s_n)$ at the current time.\n", "\n", "Its belief vector is the vector $(b(s_1), b(s_2), ... b(s_n))$ given by the function $b(s)$:\n", "\\begin{align*}\n", "b(s) &= 0 \\quad \\text{if }s \\neq s_i \\\\ &= 1 \\quad \\text{if } s = s_i\n", "\\end{align*}\n", "\n", "Note that $b(s)$ is a probability distribution that necessarily sums to $1$ over all $s$.\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### POMDPs - a conceptual outline\n", "\n", "The POMDP really has only two modifications to the **problem formulation** compared to the MDP.\n", "\n", "- **Belief state** - In the real world, the current state of an agent is often not known with complete certainty. This makes the concept of a belief vector extremely relevant. It allows the agent to represent different degrees of certainty with which it _believes_ it is in each state.\n", "\n", "- **Evidence percepts** - In the real world, agents often have certain kinds of evidence, collected from sensors. They can use the probability distribution of observed evidence, conditional on state, to consolidate their information. This is a known distribution $P(e\\ |\\ s)$ - $e$ being an evidence, and $s$ being the state it is conditional on.\n", "\n", "Consider the world we used for the MDP. \n", "\n", "![title](images/grid_mdp.jpg)\n", "\n", "#### Using the belief vector\n", "An agent beginning at $(1, 1)$ may not be certain that it is indeed in $(1, 1)$. 
Consider a belief vector $b$ such that:\n", "\\begin{align*}\n", " b((1,1)) &= 0.8 \\\\\n", " b((2,1)) &= 0.1 \\\\\n", " b((1,2)) &= 0.1 \\\\\n", " b(s) &= 0 \\quad \\quad \\forall \\text{ other } s\n", "\\end{align*}\n", "\n", "By horizontally catenating each row, we can represent this as an 11-dimensional vector (omitting $(2, 2)$).\n", "\n", "Thus, taking $s_1 = (1, 1)$, $s_2 = (1, 2)$, ... $s_{11} = (4,3)$, we have $b$:\n", "\n", "$b = (0.8, 0.1, 0, 0, 0.1, 0, 0, 0, 0, 0, 0)$ \n", "\n", "This fully represents the certainty to which the agent is aware of its state.\n", "\n", "#### Using evidence\n", "The evidence observed here could be the number of adjacent 'walls' or 'dead ends' observed by the agent. We assume that the agent cannot 'orient' the walls - only count them.\n", "\n", "In this case, $e$ can take only two values, 1 and 2. This gives $P(e\\ |\\ s)$ as:\n", "\\begin{align*}\n", " P(e=2\\ |\\ s) &= \\frac{1}{7} \\quad \\forall \\quad s \\in \\{s_1, s_2, s_4, s_5, s_8, s_9, s_{11}\\}\\\\\n", " P(e=1\\ |\\ s) &= \\frac{1}{4} \\quad \\forall \\quad s \\in \\{s_3, s_6, s_7, s_{10}\\} \\\\\n", " P(e\\ |\\ s) &= 0 \\quad \\forall \\quad \\text{ other } s, e\n", "\\end{align*}\n", "\n", "Note that the implications of the evidence on the state must be known **a priori** to the agent. Ways of reliably learning this distribution from percepts are beyond the scope of this notebook." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### POMDPs - a rigorous outline\n", "\n", "A POMDP is thus a sequential decision problem for for a *partially* observable, stochastic environment with a Markovian transition model, a known 'sensor model' for inferring state from observation, and additive rewards. \n", "\n", "Practically, a POMDP has the following, which an MDP also has:\n", "- a set of states, each denoted by $s$\n", "- a set of actions available in each state, $A(s)$\n", "- a reward accrued on attaining some state, $R(s)$\n", "- a transition probability $P(s'\\ |\\ s, a)$ of action $a$ changing the state from $s$ to $s'$\n", "\n", "And the following, which an MDP does not:\n", "- a sensor model $P(e\\ |\\ s)$ on evidence conditional on states\n", "\n", "Additionally, the POMDP is now uncertain of its current state hence has:\n", "- a belief vector $b$ representing the certainty of being in each state (as a probability distribution)\n", "\n", "\n", "#### New uncertainties\n", "\n", "It is useful to intuitively appreciate the new uncertainties that have arisen in the agent's awareness of its own state.\n", "\n", "- At any point, the agent has belief vector $b$, the distribution of its believed likelihood of being in each state $s$.\n", "- For each of these states $s$ that the agent may **actually** be in, it has some set of actions given by $A(s)$.\n", "- Each of these actions may transport it to some other state $s'$, assuming an initial state $s$, with probability $P(s'\\ |\\ s, a)$\n", "- Once the action is performed, the agent receives a percept $e$. $P(e\\ |\\ s)$ now tells it the chances of having perceived $e$ for each state $s$. 
The agent must use this information to update its belief vector appropriately.\n", "\n", "#### Evolution of the belief vector - the `FORWARD` function\n", "\n", "The new belief vector $b'(s')$ after taking action $a$ from belief vector $b(s)$ and observing evidence $e$ is:\n", "$$ b'(s') = \\alpha P(e\\ |\\ s') \\sum_s P(s'\\ |\\ s, a) b(s)$$ \n", "\n", "where $\\alpha$ is a normalising constant (to retain the interpretation of $b$ as a probability distribution).\n", "\n", "This equation simply sums, over every possible state $s$, the likelihood of moving from $s$ to $s'$ times the initial likelihood of being in $s$, and then multiplies the result by the likelihood of observing the evidence $e$ in the new state $s'$. \n", "\n", "This function is represented as `b' = FORWARD(b, a, e)`.\n", "\n", "#### Probability distribution of the evolving belief vector\n", "\n", "The goal here is to find $P(b'\\ |\\ b, a)$ - the probability that action $a$ transforms belief vector $b$ into belief vector $b'$. The following steps illustrate this:\n", "\n", "The probability of observing evidence $e$ when action $a$ is enacted on belief vector $b$ can be distributed over each possible new state $s'$ resulting from it:\n", "\\begin{align*}\n", " P(e\\ |\\ b, a) &= \\sum_{s'} P(e\\ |\\ b, a, s') P(s'\\ |\\ b, a) \\\\\n", " &= \\sum_{s'} P(e\\ |\\ s') P(s'\\ |\\ b, a) \\\\\n", " &= \\sum_{s'} P(e\\ |\\ s') \\sum_s P(s'\\ |\\ s, a) b(s)\n", "\\end{align*}\n", "\n", "The probability of getting belief vector $b'$ from $b$ by application of action $a$ can thus be summed over all possible evidence values $e$:\n", "\\begin{align*}\n", " P(b'\\ |\\ b, a) &= \\sum_{e} P(b'\\ |\\ b, a, e) P(e\\ |\\ b, a) \\\\\n", " &= \\sum_{e} P(b'\\ |\\ b, a, e) \\sum_{s'} P(e\\ |\\ s') \\sum_s P(s'\\ |\\ s, a) b(s)\n", "\\end{align*}\n", "\n", "where $P(b'\\ |\\ b, a, e) = 1$ if $b' = $ `FORWARD(b, a, e)` and $= 0$ otherwise.\n", "\n", "Given initial and final belief states $b$ and $b'$, the transition probabilities still depend on the action $a$ and observed evidence $e$. Some belief states may be reachable under a given action in principle, yet assign non-zero probability to states that the evidence $e$ rules out. The above condition therefore ensures that only valid combinations of $(b', b, a, e)$ are considered.\n", "\n", "#### A modified reward space\n", "\n", "For MDPs, the reward space was simple - one reward per available state. However, for a belief vector $b(s)$, the expected reward is now:\n", "$$\\rho(b) = \\sum_s b(s) R(s)$$\n", "\n", "Thus, since the belief vector can take infinitely many values, so can the expected reward: $\\rho(b)$ is a linear function of $b$, i.e. a hyperplane over the belief space." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we know the basics, let's have a look at the `POMDP` class." ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
class POMDP(MDP):\n",
       "\n",
       "    """A Partially Observable Markov Decision Process, defined by\n",
       "    a transition model P(s'|s,a), actions A(s), a reward function R(s),\n",
       "    and a sensor model P(e|s). We also keep track of a gamma value,\n",
       "    for use by algorithms. The transition and the sensor models\n",
       "    are defined as matrices. We also keep track of the possible states\n",
       "    and actions for each state. [page 659]."""\n",
       "\n",
       "    def __init__(self, actions, transitions=None, evidences=None, rewards=None, states=None, gamma=0.95):\n",
       "        """Initialize variables of the pomdp"""\n",
       "\n",
       "        if not (0 < gamma <= 1):\n",
       "            raise ValueError('A POMDP must have 0 < gamma <= 1')\n",
       "\n",
       "        self.states = states\n",
       "        self.actions = actions\n",
       "\n",
       "        # transition model cannot be undefined\n",
       "        self.t_prob = transitions or {}\n",
       "        if not self.t_prob:\n",
       "            print('Warning: Transition model is undefined')\n",
       "        \n",
       "        # sensor model cannot be undefined\n",
       "        self.e_prob = evidences or {}\n",
       "        if not self.e_prob:\n",
       "            print('Warning: Sensor model is undefined')\n",
       "        \n",
       "        self.gamma = gamma\n",
       "        self.rewards = rewards\n",
       "\n",
       "    def remove_dominated_plans(self, input_values):\n",
       "        """\n",
       "        Remove dominated plans.\n",
       "        This method finds all the lines contributing to the\n",
       "        upper surface and removes those which don't.\n",
       "        """\n",
       "\n",
       "        values = [val for action in input_values for val in input_values[action]]\n",
       "        values.sort(key=lambda x: x[0], reverse=True)\n",
       "\n",
       "        best = [values[0]]\n",
       "        y1_max = max(val[1] for val in values)\n",
       "        tgt = values[0]\n",
       "        prev_b = 0\n",
       "        prev_ix = 0\n",
       "        while tgt[1] != y1_max:\n",
       "            min_b = 1\n",
       "            min_ix = 0\n",
       "            for i in range(prev_ix + 1, len(values)):\n",
       "                if values[i][0] - tgt[0] + tgt[1] - values[i][1] != 0:\n",
       "                    trans_b = (values[i][0] - tgt[0]) / (values[i][0] - tgt[0] + tgt[1] - values[i][1])\n",
       "                    if 0 <= trans_b <= 1 and trans_b > prev_b and trans_b < min_b:\n",
       "                        min_b = trans_b\n",
       "                        min_ix = i\n",
       "            prev_b = min_b\n",
       "            prev_ix = min_ix\n",
       "            tgt = values[min_ix]\n",
       "            best.append(tgt)\n",
       "\n",
       "        return self.generate_mapping(best, input_values)\n",
       "\n",
       "    def remove_dominated_plans_fast(self, input_values):\n",
       "        """\n",
       "        Remove dominated plans using approximations.\n",
       "        Resamples the upper boundary at intervals of 100 and\n",
       "        finds the maximum values at these points.\n",
       "        """\n",
       "\n",
       "        values = [val for action in input_values for val in input_values[action]]\n",
       "        values.sort(key=lambda x: x[0], reverse=True)\n",
       "\n",
       "        best = []\n",
       "        sr = 100\n",
       "        for i in range(sr + 1):\n",
       "            x = i / float(sr)\n",
       "            maximum = (values[0][1] - values[0][0]) * x + values[0][0]\n",
       "            tgt = values[0]\n",
       "            for value in values:\n",
       "                val = (value[1] - value[0]) * x + value[0]\n",
       "                if val > maximum:\n",
       "                    maximum = val\n",
       "                    tgt = value\n",
       "\n",
       "            if all(any(tgt != v) for v in best):\n",
       "                best.append(tgt)\n",
       "\n",
       "        return self.generate_mapping(best, input_values)\n",
       "\n",
       "    def generate_mapping(self, best, input_values):\n",
       "        """Generate mappings after removing dominated plans"""\n",
       "\n",
       "        mapping = defaultdict(list)\n",
       "        for value in best:\n",
       "            for action in input_values:\n",
       "                if any(all(value == v) for v in input_values[action]):\n",
       "                    mapping[action].append(value)\n",
       "\n",
       "        return mapping\n",
       "\n",
       "    def max_difference(self, U1, U2):\n",
       "        """Find maximum difference between two utility mappings"""\n",
       "\n",
       "        for k, v in U1.items():\n",
       "            sum1 = 0\n",
       "            for element in U1[k]:\n",
       "                sum1 += sum(element)\n",
       "            sum2 = 0\n",
       "            for element in U2[k]:\n",
       "                sum2 += sum(element)\n",
       "        return abs(sum1 - sum2)\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(POMDP)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `POMDP` class includes all variables of the `MDP` class and additionally also stores the sensor model in `e_prob`.\n", "
\n", "
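\n",
"To connect the class back to the `FORWARD` equation above, here is a minimal sketch of a belief update written with plain Python lists. It is not part of `mdp.py`; the function name is ours, and it assumes the `t_prob[action][from_state][to_state]` / `e_prob[action][state][percept]` array layout used by the example cells further down.\n",
"\n",
"```python\n",
"def forward(b, a, e, t_prob, e_prob):\n",
"    # b'(s') = alpha * P(e|s') * sum_s P(s'|s,a) * b(s)\n",
"    n = len(b)\n",
"    predicted = [sum(t_prob[a][s][s2] * b[s] for s in range(n)) for s2 in range(n)]\n",
"    unnormalized = [e_prob[a][s2][e] * predicted[s2] for s2 in range(n)]\n",
"    alpha = 1 / sum(unnormalized)  # normalising constant\n",
"    return [alpha * p for p in unnormalized]\n",
"\n",
"# With the two-state model defined below (action 0 = Stay, percept 0 = 'looks like state 0'):\n",
"# forward([0.5, 0.5], 0, 0, t_prob, e_prob)  ->  [0.6, 0.4]\n",
"```\n",
"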
\n", "`remove_dominated_plans`, `remove_dominated_plans_fast`, `generate_mapping` and `max_difference` are helper methods for `pomdp_value_iteration`, which will be explained shortly." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To understand how we can model a partially observable MDP, let's take a simple example.\n", "Let's consider a simple two-state world.\n", "The states are labelled 0 and 1, with the reward at state 0 being 0 and at state 1 being 1.\n", "
\n", "There are two actions:\n", "
\n", "`Stay`: stays put with probability 0.9 and\n", "`Go`: switches to the other state with probability 0.9.\n", "
\n", "For the hand calculations below, let's take the discount factor `gamma` to be 1 (the code cell that defines this model further down uses `gamma = 0.95`).\n", "
\n", "The sensor reports the correct state with probability 0.6.\n", "
\n", "This is a simple problem with a trivial solution.\n", "Obviously the agent should `Stay` when it thinks it is in state 1 and `Go` when it thinks it is in state 0.\n", "
\n", "The belief space can be viewed as one-dimensional because the two probabilities must sum to 1." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's model this POMDP using the `POMDP` class." ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [], "source": [ "# transition probability P(s'|s,a)\n", "t_prob = [[[0.9, 0.1], [0.1, 0.9]], [[0.1, 0.9], [0.9, 0.1]]]\n", "# evidence function P(e|s)\n", "e_prob = [[[0.6, 0.4], [0.4, 0.6]], [[0.6, 0.4], [0.4, 0.6]]]\n", "# reward function\n", "rewards = [[0.0, 0.0], [1.0, 1.0]]\n", "# discount factor\n", "gamma = 0.95\n", "# actions\n", "actions = ('0', '1')\n", "# states\n", "states = ('0', '1')" ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [], "source": [ "pomdp = POMDP(actions, t_prob, e_prob, rewards, states, gamma)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have defined our `POMDP` object." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## POMDP VALUE ITERATION\n", "Defining a POMDP is useless unless we can find a way to solve it. As POMDPs can have infinitely many belief states, we cannot calculate one utility value for each state as we did in `value_iteration` for MDPs.\n", "
\n", "Instead of thinking about policies, we should think about conditional plans and how the expected utility of executing a fixed conditional plan varies with the initial belief state.\n", "
\n", "If we bound the depth of the conditional plans, then there are only finitely many such plans, and the continuous space of belief states will generally be divided into _regions_, each corresponding to a particular conditional plan that is optimal in that region. The utility function, being the maximum of a collection of hyperplanes, will be piecewise linear and convex.\n", "
\n", "For the one-step plans `Stay` and `Go`, the utility values are as follows:\n", "
\n", "
\n", "$$\\alpha_{|Stay|}(0) = R(0) + \\gamma(0.9R(0) + 0.1R(1)) = 0.1$$\n", "$$\\alpha_{|Stay|}(1) = R(1) + \\gamma(0.9R(1) + 0.1R(0)) = 1.9$$\n", "$$\\alpha_{|Go|}(0) = R(0) + \\gamma(0.9R(1) + 0.1R(0)) = 0.9$$\n", "$$\\alpha_{|Go|}(1) = R(1) + \\gamma(0.9R(0) + 0.1R(1)) = 1.1$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The utility function can be found by `pomdp_value_iteration`.\n", "
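\n",
"As a quick check of the four values above, and of the claim that the utility of a belief is the maximum over these hyperplanes, here is a small hand computation. It is only a sketch with `gamma = 1`, as assumed for these equations, and the variable names are ours.\n",
"\n",
"```python\n",
"R = [0, 1]\n",
"T = {'Stay': [[0.9, 0.1], [0.1, 0.9]], 'Go': [[0.1, 0.9], [0.9, 0.1]]}\n",
"gamma = 1\n",
"\n",
"# one-step utility vectors: alpha_p(s) = R(s) + gamma * sum over s2 of P(s2 | s, p) * R(s2)\n",
"alpha = {p: [R[s] + gamma * sum(T[p][s][s2] * R[s2] for s2 in (0, 1)) for s in (0, 1)] for p in T}\n",
"print(alpha)  # {'Stay': [0.1, 1.9], 'Go': [0.9, 1.1]}, up to float rounding\n",
"\n",
"# the utility of a belief b is the best of these hyperplanes at that point\n",
"def utility_of(b):\n",
"    return max(sum(b[s] * a[s] for s in (0, 1)) for a in alpha.values())\n",
"\n",
"print(utility_of([1, 0]), utility_of([0.5, 0.5]), utility_of([0, 1]))  # roughly 0.9, 1.0, 1.9\n",
"```\n",
"\n",
"Enumerating deeper conditional plans and pruning the dominated ones is exactly what `pomdp_value_iteration` automates.\n",
"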
\n", "To summarize, it generates a set of all plans consisting of an action and, for each possible next percept, a plan in U with computed utility vectors.\n", "The dominated plans are then removed from this set and the process is repeated till the maximum difference between the utility functions of two consecutive iterations reaches a value less than a threshold value." ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "### AIMA3e\n", "__function__ POMDP-VALUE-ITERATION(_pomdp_, _ε_) __returns__ a utility function \n", " __inputs__: _pomdp_, a POMDP with states _S_, actions _A_(_s_), transition model _P_(_s′_ | _s_, _a_), \n", "      sensor model _P_(_e_ | _s_), rewards _R_(_s_), discount _γ_ \n", "     _ε_, the maximum error allowed in the utility of any state \n", " __local variables__: _U_, _U′_, sets of plans _p_ with associated utility vectors _αp_ \n", "\n", " _U′_ ← a set containing just the empty plan \\[\\], with _α\\[\\]_(_s_) = _R_(_s_) \n", " __repeat__ \n", "   _U_ ← _U′_ \n", "   _U′_ ← the set of all plans consisting of an action and, for each possible next percept, \n", "     a plan in _U_ with utility vectors computed according to Equation(__??__) \n", "   _U′_ ← REMOVE\\-DOMINATED\\-PLANS(_U′_) \n", " __until__ MAX\\-DIFFERENCE(_U_, _U′_) < _ε_(1 − _γ_) ⁄ _γ_ \n", " __return__ _U_ \n", "\n", "---\n", "__Figure ??__ A high\\-level sketch of the value iteration algorithm for POMDPs. The REMOVE\\-DOMINATED\\-PLANS step and MAX\\-DIFFERENCE test are typically implemented as linear programs." ], "text/plain": [ "" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pseudocode('POMDP-Value-Iteration')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's have a look at the `pomdp_value_iteration` function." ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def pomdp_value_iteration(pomdp, epsilon=0.1):\n",
       "    """Solving a POMDP by value iteration."""\n",
       "\n",
       "    U = {'':[[0]* len(pomdp.states)]}\n",
       "    count = 0\n",
       "    while True:\n",
       "        count += 1\n",
       "        prev_U = U\n",
       "        values = [val for action in U for val in U[action]]\n",
       "        value_matxs = []\n",
       "        for i in values:\n",
       "            for j in values:\n",
       "                value_matxs.append([i, j])\n",
       "\n",
       "        U1 = defaultdict(list)\n",
       "        for action in pomdp.actions:\n",
       "            for u in value_matxs:\n",
       "                u1 = Matrix.matmul(Matrix.matmul(pomdp.t_prob[int(action)], Matrix.multiply(pomdp.e_prob[int(action)], Matrix.transpose(u))), [[1], [1]])\n",
       "                u1 = Matrix.add(Matrix.scalar_multiply(pomdp.gamma, Matrix.transpose(u1)), [pomdp.rewards[int(action)]])\n",
       "                U1[action].append(u1[0])\n",
       "\n",
       "        U = pomdp.remove_dominated_plans_fast(U1)\n",
       "        # replace with U = pomdp.remove_dominated_plans(U1) for accurate calculations\n",
       "        \n",
       "        if count > 10:\n",
       "            if pomdp.max_difference(U, prev_U) < epsilon * (1 - pomdp.gamma) / pomdp.gamma:\n",
       "                return U\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(pomdp_value_iteration)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This function uses two aptly named helper methods from the `POMDP` class, `remove_dominated_plans` and `max_difference`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try solving a simple one-dimensional POMDP using value-iteration.\n", "
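\n",
"The mapping returned by `pomdp_value_iteration` associates each action with the utility vectors of its surviving (undominated) plans. Given a belief $b$, the value of the POMDP is the largest $\\sum_s b(s)\\ \\alpha(s)$ over all of these vectors, and the best action is the one owning the maximising vector. Below is a minimal sketch of how one might query such a result; the helper is our own, and it assumes `utility` is the dictionary returned by `pomdp_value_iteration` and `b` a list of belief probabilities.\n",
"\n",
"```python\n",
"def best_action(utility, b):\n",
"    # expected value of a single alpha-vector at belief b\n",
"    def value(alpha):\n",
"        return sum(p * v for p, v in zip(b, alpha))\n",
"    # pick the action whose best surviving plan scores highest at b\n",
"    return max(utility, key=lambda action: max(value(alpha) for alpha in utility[action]))\n",
"```\n",
"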
\n", "Consider the problem of a user listening to voicemails.\n", "At the end of each message, they can either _save_ or _delete_ a message.\n", "This forms the unobservable state _S = {save, delete}_.\n", "It is the task of the POMDP solver to guess which goal the user has.\n", "
\n", "The belief space has two elements, _b(s = save)_ and _b(s = delete)_.\n", "For example, for the belief state _b = (1, 0)_, the left end of the line segment indicates _b(s = save) = 1_ and _b(s = delete) = 0_.\n", "The intermediate points represent varying degrees of certainty in the user's goal.\n", "
\n", "The machine has three available actions: it can _ask_ what the user wishes to do in order to infer their current goal, or it can _doSave_ or _doDelete_ and move to the next message.\n", "If the user says _save_, the reply is misheard with probability 0.2, whereas if the user says _delete_, it is misheard with probability 0.3.\n", "
\n", "The machine receives a large positive reward (+5) for getting the user's goal correct, a very large negative reward (-20) for taking the action _doDelete_ when the user wanted _save_, and a smaller but still significant negative reward (-10) for taking the action _doSave_ when the user wanted _delete_. \n", "There is also a small negative reward for taking the _ask_ action (-1).\n", "The discount factor is set to 0.95 for this example.\n", "
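\n",
"Before defining the model in code, here is a small hand computation (a sketch of the belief update described earlier, not part of the library) showing how a single _ask_ that hears _save_ sharpens an initially uniform belief. It uses only the error rates quoted above, takes state 0 to mean _save_ and state 1 to mean _delete_, and assumes, as in the next cell, that asking does not change the hidden state:\n",
"\n",
"```python\n",
"b = [0.5, 0.5]            # prior belief over (save, delete)\n",
"p_hear_save = [0.8, 0.3]  # P(the user is heard to say 'save' | s), from the 0.2 / 0.3 error rates\n",
"unnormalized = [p_hear_save[s] * b[s] for s in (0, 1)]\n",
"posterior = [u / sum(unnormalized) for u in unnormalized]\n",
"print(posterior)          # roughly [0.73, 0.27]\n",
"```\n",
"\n",
"Whether one more _ask_ is worth its cost of -1, compared with committing to _doSave_ or _doDelete_ right away, is exactly the trade-off the value iteration below resolves.\n",
"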
\n", "Let's define the POMDP." ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [], "source": [ "# transition function P(s'|s,a)\n", "t_prob = [[[0.65, 0.35], [0.65, 0.35]], [[0.65, 0.35], [0.65, 0.35]], [[1.0, 0.0], [0.0, 1.0]]]\n", "# evidence function P(e|s)\n", "e_prob = [[[0.5, 0.5], [0.5, 0.5]], [[0.5, 0.5], [0.5, 0.5]], [[0.8, 0.2], [0.3, 0.7]]]\n", "# reward function\n", "rewards = [[5, -10], [-20, 5], [-1, -1]]\n", "\n", "gamma = 0.95\n", "actions = ('0', '1', '2')\n", "states = ('0', '1')\n", "\n", "pomdp = POMDP(actions, t_prob, e_prob, rewards, states, gamma)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have defined the `POMDP` object.\n", "Let's run `pomdp_value_iteration` to find the utility function." ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [], "source": [ "utility = pomdp_value_iteration(pomdp, epsilon=0.1)" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYEAAAD8CAYAAACRkhiPAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvNQv5yAAAIABJREFUeJzsnXd81dX9/5+fm733JiEBEkYYYRNkrwABW9yjWq2tP7WuVq24v0WtiKNaF1K0al3ggNIEhIDKEhlhRCBkD7L33rnn98eH+ykrZN2bm3Gej0cekuQzziee+359zjnv83orQggkEolEMjDRmbsBEolEIjEfUgQkEolkACNFQCKRSAYwUgQkEolkACNFQCKRSAYwUgQkEolkACNFQCKRSAYwUgQkEolkACNFQCKRSAYwluZuwPl4enqK4OBgczdDIpFI+hTx8fElQgivrpzbq0QgODiYI0eOmLsZEolE0qdQFCWrq+fK6SCJRCIZwEgRkEgkkgGMFAGJRCIZwEgRkEgkkgGMFAGJRCIZwEgRkEgkkgGMFAGJRCIZwEgRkEgkkgGMFAGJRCIZwEgRkEgkkgHMgBaBF198kfDwcMaOHUtERAQHDx40d5MkfYBNmzahKApnzpy54nGOjo491CJJW1hYWBAREUF4eDjjxo3j9ddfR6/XX/GczMxMRo8e3e4xn3/+uTGbajYGrAgcOHCAmJgYjh49SkJCAjt37iQwMNDczZL0Ab744gtmzJjBl19+ae6mSNrBzs6O48ePc+rUKeLi4ti6dSt//etfu31dKQL9gPz8fDw9PbGxsQHA09MTf39/Vq1axeTJkxk9ejR33303QggSExOZMmWKdm5mZiZjx44FID4+ntmzZzNx4kSioqLIz883y/NIeoaamhr279/PBx98oIlAfn4+s2bNIiIigtGjR7N3794LzikpKSEyMpLY2FhzNFlyDm9vb9atW8fbb7+NEILW1lYee+wxJk+ezNixY3n//fcvOaetY1auXMnevXuJiIjg73//e4eu1WsRQvSar4kTJ4qeorq6WowbN06EhoaKe++9V/z4449CCCFKS0u1Y37zm9+ILVu2CCGEGDdunEhLSxNCCLF69Wrx/PPPi6amJhEZGSmKioqEEEJ8+eWX4s477+yxZ5D0PP/+97/F7373OyGEEJGRkSI+Pl68+uqr4oUXXhBCCNHS0iKqqqqEEEI4ODiIgoICMWXKFLFjxw6ztXkg4+DgcMnPXF1dRUFBgXj//ffF888/L4QQoqGhQUycOFGkp6eLjIwMER4eLoQQbR7zww8/iOjoaO2abR3XUwBHRBfjrlGspBVF+RBYBhQJIUaf+5k7sAEIBjKBG4QQ5ca4nzFwdHQkPj6evXv38sMPP3DjjTeyevVqnJycWLNmDXV1dZSVlREeHs7y5cu54YYb2LhxIytXrmTDhg1s2LCBpKQkTp48ycKFCwH1rcHPz8/MTyYxJV988QUPP/wwADfddBNffPEFy5cv53e/+x3Nzc38+te/JiIiAoDm5mbmz5/PO++8w+zZs83ZbMl5qDETduzYQUJCAl9//TUAlZWVpKSkEBYWph3b1jHW1tYXXLOt40JCQnrikbpHV9Xj/C9gFjABOHnez9YAK8/9eyXwcnvXCRsdJhpbGk2ilO3x1VdfiQULFghvb2+RnZ0thBDiueeeE88995wQQojU1FQxfvx4kZSUJCZMmCCEECIhIUFMmzbNLO2V9DwlJSXC1tZWBAUFicGDB4tBgwaJwMBAodfrRW5urli3bp0YPXq0+Pjjj4UQQtjb24vbb79dPPHEE2Zu+cDl4pFAWlqacHd3F3q9XlxzzTXiu+++u+Sc80cCbR1z8UigreN6gvz87o0EjLImIITYA5Rd9ONfAR+f+/fHwK/bu05yaTKeazy5buN1fHT8IwprCo3RvMuSlJRESkqK9v3x48cZPnw4oK4P1NTUaKoOMHToUCwsLHj++ee58cYbARg+fDjFxcUcOHAAUN/8Tp06ZbI2S8zL119/ze23305WVhaZmZmcPXuWkJAQ9uzZg7e3N3/4wx+46667OHr0KACKovDhhx9y5swZVq9ebebWS4qLi7nnnnu4//77URSFqKgo3nvvPZqbmwFITk6mtrb2gnPaOsbJyYnq6up2jzMlubnw0EPQ3cGGKSuL+Qgh8gGEEPmKoni3d8JQ96HMGz2P2JRYvkn8BgWFyQGTWRa6jOiwaMb7jkdRFKM0rqamhgceeICKigosLS0ZNmwY69atw9XVlTFjxhAcHMzkyZMvOOfGG2/kscceIyMjAwBra2u+/vprHnzwQSorK2lpaeHhhx8mPDzcKG2U9C6++OILVq5cecHPrr32Wu644w4cHBywsrLC0dGRTz75RPu9hYUFX375JcuXL8fZ2Zn77ruvp5s9oKmvryciIoLm5mYsLS257bbb+POf/wzA73//ezIzM5kwYQJCCLy8vNi8efMF57d1zNixY7G0tGTcuHHccccdPP
TQQ+1ey1hkZ8PLL8P69aDXw223wb/+1fXrKeLc/Fh3URQlGIgR/1sTqBBCuJ73+3IhhNtlzrsbuBsgKChoYlZWFkIIjhccJyY5htiUWA7lHkIg8HfyZ+mwpSwLW8aCIQtwsHYwStslEomkt5OZCS+99L+Af+edsHKlOhJQFCVeCDGpK9c1pQgkAXPOjQL8gB+FEMOvdI1JkyaJy9UYLqwpZFvqNmJTYtmeup3qpmpsLGyYEzyHZWHLiA6NJsStDyzAmIGdZeos3QJ3dzO3RCLpOgO5H6elwd/+Bp98Ajod/P738PjjEBT0v2N6qwi8ApQKIVYrirIScBdC/OVK12hLBM6nqbWJfdn7iEmOISY5hpQydV5/lNcookOjWRa2jOmB07HUmXKmq+8w59gxAH4cP97MLZFIus5A7MdJSWrw/+wzsLKCu++Gv/wFAgIuPdbsIqAoyhfAHMATKASeAzYDG4EgIBu4Xghx8eLxBXREBC4muTSZ2ORYYlNi2Z21mxZ9C662riwetphloctYPGwxHvYeXXiq/sFA/PBI+h8DqR+fPg0vvghffgk2NnDvvfDoo3Cl7PPuiIBRXpeFEDe38av5xrj+lQjzCCMsMow/Rf6JqsYq4tLiiEmJYWvKVr48+SU6RUfkoEhtlDDae7TRFpclEonEWPzyC7zwAnz1Fdjbq4H/kUfAu92Umu7Rr+ZMnG2cuXbUtVw76lr0Qs+RvCPa4vKT3z/Jk98/SZBLkCYIc4PnYmdlZ+5mSySSAczx4/D88/Dtt+DkBE88AX/6E3h69sz9e5UItOfu1xl0io4pAVOYEjCFVXNXkVedx9aUrcQkx/DxiY9578h72FnaMX/IfC0FdZDzIKPdXyKRSK7EkSNq8N+yBVxc4Nln1bz/nl77NtrCsDFQFEUA6HQ6fH19iYqK4qmnnmLo0KFGvU9DSwO7M3eri8spMWRWZAIwzmecNkqYEjAFC52FUe9rDpLq6gAYbm9v5pZIJF2nP/Xjn3+GVatg2zZwc1Pf+h94AFxd2z+3Lcy+MGwsDCLQFs7OzkRERPDII4+wfPlyo8ztCyFILEnUpo32Z++nVbTiae/JkmFLWBa2jEVDF+Fq243/QxKJZMCzb58a/OPiwMNDnfO/7z5wdu7+tfuNCLi6uopp06aRnZ1NWloaTU1N7Z5jZWVFSEgIt9xyC3/6059w7uZftLy+nO1p24lJjmFb6jbK6suwUCyYOXimNkoY7jG8zywu/7ekBIDlPTXBKJGYgL7aj4WA3bvV4P/DD+oi72OPwT33gDFrDvUbETh/JGBjY4OXlxe+vr5YW1tTX19PRkYGlZWVtNdmRVHw9PRk9uzZPPfcc+1WCWqLVn0rP+f8TGxKLDHJMfxS9AsAQ9yGaOsIswfPxsbSpkvX7wkGUmqdpP/S1/qxELBrlxr89+4FX191g9fdd6uZP8am34hASEiIWLRoEQkJCWRkZFBWVqYZMhmwtrbGw8MDV1dXdDod5eXlFBYW0tra2u717ezsGD16NA888AC33HILFhadm/PPrswmNjmWmJQYvs/4noaWBhysHFg0dBHRodEsDV2Kn1PvspLuax8eieRy9JV+LARs364G/wMH1I1dK1fCXXeBnQkTEfuNCFxus1hZWRlxcXHs2bOH48ePk5GRQUlJySXiYGVlhYuLC/b29uj1ekpLS6mvr2/3nhYWFvj7+3Pttdfy7LPP4uZ2ib3RZalrruP7jO81UcipygFgot9EloUtY1nYMib4TUCnmLd4W1/58EgkV6K392MhIDZWDf6HD6uWDk88ofr72PTAREG/FoG2qKqqYufOnezZs4djx46RlpZGSUkJjY2NFxxnaWmJg4MDNjY21NfXU1tb26FUVFdXVyIjI3n22WeZNm3aFY8VQpBQmKBNG/2c8zMCgY+DD9Gh0USHRbNwyEKcbJw69GzGpLd/eCSSjtBb+7Fer6Z4rloFx46pZm5PPgm33w4X1Z0xKQNSBNqitraW77//nt27d3P06FFSU1MpLi6moaHhguMsLCywtbVFp9NRX19PS0tLu9e2sbFh2LBh3H333dx3331YWl5+m0VxbTHfpX5HbEos36V+R2VjJVY6K+YEz9EWl4e6GzfttS1664dHIukMva0f6/Xq5q7nn4eEBBg2DJ56Cm69VfX56WmkCHSAhoYGdu/ezY8//kh8fDwpKSkUFhZeMmWk0+mwsrJCr9dfMuV0OXQ6Hd7e3ixdupQXXnjhkvKSza3N/HT2J21PwpmSMwAM9xiuOaDOCJqBlYVpes7Zc+IXaGtrkutLJD1Bb+nHra2wcaNq73D6NAwfDk8/DTfdBG28E/YIUgS6QVNTE/v27eOHH37gyJEjJCcnU1BQQN25zSkGFEVBp9Oh1+vbzU4CcHBwYMKECTz11FNERUVpP08rSyM2RTW8+zHzR5pam3C2cSZqaBTLwpaxZNgSvBy8jP6cEomk67S0wBdfqMZuSUkwahQ88wxcfz10Mr/EJPQbEXBxcRGPP/44V199NeHh4WbNxW9paeHgwYPs3LmTw4cPk5ycTF5e3mVLximK0iFhsLS0JCgoiN/+9rc8/vjjNNHEzvSdmigU1BSgoDB10FQtBXWcz7hu/R02FBUBcKOpXagkEhNirn7c3AyffqoG/7Q0GDtWDf7XXKN6+/cW+o0IXGnHsJWVFa6urvj7+zN27FiWL1/OwoULce3OXusuoNfriY+PZ+fOnRw8eJAzZ86Ql5d3Qb3RzqAoCi4uLsybN49bH7qVX/S/EJMSw5E8dUQ0yHmQurgcGs38IfOxt+pcknFvm0uVSLpCT/fjpib4+GPVzz8zEyZMUIP/1Vf3ruBvYECIQEfQ6XTY2dnh6elJSEgIM2bMYMWKFYwbN67TewI6i16vJyEhgbi4OA4ePEhiYiI5OTlUV1d3aJRwMVbWVvgM9sF3iS9nfM9Q01SDraUtc4PnamsJg10Ht3sdKQKS/kBP9ePGRvjwQ7WM49mzMHkyPPccLF0KvdkkoF+JgCFrx9bWFmtray0Dp7GxkZqaGhobGzu0MexKGAqC+/v7M3r0aBYvXszVV1+Nuwns+4QQJCYmsmPHDn7++WdOnz7N2bNnqaqq6rxrqgI6Bx36kXqYD6ODRmvTRtMGTbtsNTUpApL+gKn7cX29Wrj95ZchNxciI9Xgv2hR7w7+BvqNCDg4OAhfX18qKyupra2lqanpioHS0tISKyurC8SiqamJhoYGWlpauvQGbkBRFGxtbXFzc2PIkCFMnTqVa6+9lsmTJ7eZGtpZkpOTiYuL48CBA5w6dYrs7GwqKio6Lw6WoAvUMf+e+dy55E6ihkXhbqcKmhQBSX/AVP24rg7efx/WrIGCApg5Uw3+8+b1jeBvoN+IwOWygxobG0lOTubMmTOkpKSQlZVFTk4OhYWFlJaWUlVVpQnGlZ7FwsICS0tLLYC3tLTQ0tLS7VGFhYUFjo6O+Pn5MXLkSKKiorjmmmvw8up6hk9WVhbbt29n//79nDx5kuzsbMrLyzvXVh24ertif+2vCPntH9k3eXKX2yORm
Btji0BNDbz3Hrz6KhQVqUH/2Wdh9myjXL7H6dci0Bnq6+s5c+YMiYmJpKWlkZmZSV5eHgUFBZSVlVFVVUVdXR3Nzc3tCoZhDaG1tbXDaaFtoSgKVlZWuLm5MXjwYCZPnsy1117LzJkzOzWqyM3NJS4ujn379pGQkEBmZiZlZWWdEgdnZ2eioqJ4++238ZYZQ5I+Qsk5R2HPbm7DraqCd96B116D0lJ1uueZZ2DGDGO00nxIEegC1dXVJCYmkpSURGpqKtnZ2eTk5FBUVERZWRnV1dWaYFwJ3blUASFEt4TCcC07Ozt8fHwYOXIk8+bN44YbbmDQoCtXPCsqKiIuLo69e/eSkJCgWWh0dFrJysqK4cOHs3r1aqKjo7v1DBJJb6SiAt56C/7+dygvVxd6n3kG2nGE6bVUVFTw1VdfsWvXLk6fPs0vv/wiRcCUlJeXc/r0aZKSkkhLSyM7O5u8vDyKioooLy+nurqa+vr6Du0wNgaWlpY4OzszePBgJkyYwIoVK4iKirpkVFFWVsZTGzeSfPAg9clJnDx9kuqKjqeyenp6cvPNN/Paa69hZY698BLJOT7KzwfgDr/OufSWlcGbb6pflZVqiuczz8CkLoXLnqG5uZm9e/eyefNm4uPjtenghoaGK436pQj0FoqLizXBSE9P1wSjuLhYEwzDwrUpURQFGxsbWp2csAsO5rGrr+bWW28lJCSEyspK/vXtv/h629ecOHqCmuwa6KB+2djYMHHiRN577z3Gjh1r0meQSAx0dk2gpER963/rLaiuVjd3Pf009Ib8iIyMDDZu3MjevXtJSUmhqKiI2traDiezWFpaYm9vj4eHB8HBwQwdOpT169f3DxFQFEVYWVlhZWWFjY0Ntra22Nvb4+joiKOjIy4uLri6uuLm5oanpydeXl54e3vj6+uLv78/fn5+2NnZ9YmqX0IICgoKNMHIyMjQBKOkpITy8nJqamraU/9uY2FhgZ2dHU4eTuj8dRRZFdHc2AzpQCnQgRklRVHw8fHhkUce4dFHHzVZWyUDl46KQFGROt//zjtq5s/116vBf8yYnmgl1NXVsWXLFrZv384vv/xCTk4OlZWV7WY6GtDpdNjY2ODg4ICzs7MW7ywsLCgtLaW4uJiKigrq6uoufpHsHyJga2srvL29qa+vp7Gxkaampi5l8Oh0OiwsLC4REwcHh0vExMPDAy8vL3x8fPD19cXPzw8/Pz8cHBx6jZjo9XrOnj3L6dOnSUlJISMjg7Nnz5Kfn09JSQkVFRVG20PRJjrQWejQt+ihg13G2tqa6dOn8/nnn19irCeRdIb2RCA/H155BdauVTd83XST6uo5apTx2iCE4MiRI3z77bccOnSI9PR0rW5JR0b2Bv8xQ1q7ra0tlpaWtLS0UFtb290Xvt4rAoqiZALVQCvQcqWGXmk6qLm5mZKSEvLz88nPz6ewsJDi4mJKSkooLS2lvLycyspKqqurqampoa6ujrq6ugvEpLNZPoqiXLAX4XJi4uLigru7uyYm3t7e+Pj44OfnR0BAAI6Ojj0qJnq9noyMDE0wXjt0iMbCQkIbGjTBqK2tpbGxsfP7EYyAoih4eHjw2GOP8eijj2oL6xLJlWhLBHJy1Bz/detUk7dbb1WDf1hY5+9RXFzMV199xffff09iYiKFhYVUV1e3m01owBDkDZ93vV5v9M+Y4QXX1tYWJycnfHx8GDZsGF999VWvF4FJQoiS9o7tiTWBlpYWSktLLxCToqIiSkpKKCsr08SkqqqKmpoaamtrLxiZNDc3d0lMzh+Z2NjYtDkycXd3v0BM/P398ff3x9nZuUticqU3qJaWFlJTU0lMTCQlJYXMzExycnIoKCigtLSUyspKampq2t2DYQysrKwYO3Ysr7/+OjNmzJDiILmAi/txdjasXg0ffKB6+99+u1rJa9iwy5/f0tJCXFwcMTExHD16VNuY2dDQYJaXofOxsrLC1tYWZ2dnfHx8CA0NZcKECUyePJkJEybg4uLS7jV6dYpobxMBY6HX6ykrKyMvL4+8vLwLRiYGMamoqLhETBoaGi6Y5uqqmBhGJnZ2djg4OODk5ISTk9Ml01wuHh54+/gQHBDAoEGDcHFx6ZKYNDU1kZycTGJiIqmpqWRkZJCYnkja2TRKS0tpqmlSF5dN8Hny9vbm1ltv5emnnzaJtYek91N3bpqkMNuCl16Cjz5Sf/6736k1fJubU9iwYQM//fQTKSkpFBcXa/PmPT3lbWFhgb29Pc7Ozvj6+hIaGsqkSZOYOnUqERERODo6Gv2evV0EMoBy1Jnk94UQ69o6ti+JgLHQ6/WUl5dfMjIpLi6+YJqrLTFpbm7usphYWlpqIxODmDg6Ol6wIOXh4YGnp+cF01z+/v64u7tfICZFtUVsS9lGbEos2xK3UZNbg0WZBcH6YLyavbCutaaqrIrCwkLKy8tpbGw02ofTwcEBb29vJk+ezF133cX8+fNNbhgo6TlqampYu3Yna9e6k5Y2HWjFwuJftLb+DThr8vsbpl9cXFzw8/MjNDSUqVOnEhkZybhx47DtBQWbersI+Ash8hRF8QbigAeEEHvO+/3dwN0AQUFBE7Oyskzanv6KEILKykry8vI0QSkuLmZ7Rga15eX4NzVdMDKpqakxqphYW1trYmLvYA/WUKfUUSbKqNHVgD14eXoxMXQi88bMY96YeQwOHIy1tTVJSUnaLm+DLUhqaio5OTlGW+jW6XQ4OzsTHBzMwoULeeihhwgICDDKtSVdRwjBgQMH2LRpE4cOHSIlJYWysrLzpiDDgKeAW4Em4H3gFSCvy/c0vPwYgnpYWBjTp0/nqquuIjw8HOueLA5sJHq1CFxwM0X5P6BGCPHq5X4/EEcCpqaz+dVCCKqrq8nNzb1kzcSQutqemHR2CH5x1oQmJufSg52dnXFxccHOzk6zBbm4LKixsLS0xMXFhbFjx3Lbbbdxyy23YGNjY5J7DQSys7N588032bJlCzk5OZ0YAY5CDf43AfXAe8CrQOElRxrSKl1cXPD399eC+rx58xg+fLjRDB97M71WBBRFcQB0Qojqc/+OA1YJIb673PFSBIyPuVxEhRDU1tZqI5OCggKKiorIyc/hRMYJknOTyS3KpamuCRrBRthgrbeGFmhtbtVGJuZetGsLa2tr3N3dmTJlCo8//jjTp083d5N6jMrKSj788EO2bt1KQkICZWVlRtz8OAZ4GrgOqMPLayPTp//ML2GuuE6bxpEVK3pN6nZvojeLwBBg07lvLYHPhRAvtnW8FAHj05utpPVCT3xePDHJMcSmxBKfHw9AoHOgVjhnXsg89E168vPzNTNAw8iktLSUsrIyKioqtPTg2tpa6urqqK+vp6amxuQ7s6+EoihYW1vj5eVFZGQkDz/8MNOmTet1mU/19fVs2bKF//znPxw/fpzCwkItjdiYWFtb4+Pjw4QJE5g7dy7R0dEMHToURVE4dgyefx42bQInJ3jwQXj4YfD0VM/tzf24N9BrRaCzSBEwPn3pw5NXncfWlK3EpsQSlxZHbXMtdpZ2zAuZp4lC
oEtgl6+vLjCu5b333iMrK8ukO7E7i8GSfPjw4cyZM4fIyEjGjx+Pv79/p32b6urq2L59O9u3byc+Pl4rYtTU1GT0Z9bpdDg4ODBixAjuvfdebrvttk5Nvxw+rAb///4XXFzUwP/QQ+DmduFxfakfmwMpApI26asfnsaWRnZn7SYmOYaY5BgyKjIAGOszVqumNjVgKha67mcBnTp1imeffZbvv/+eysrKTq1nGIKgYa+Hs7MzDg4OVFRUkJaWRlVVlVlHI11Fp9Ph6OjIoEGDCA8PZ86cOVx//fXdqpNxPgcOqMF/2zY14P/5z/DAA6oQXI6+2o97CikCkn6NEIIzJWe0aaN92ftoFa142HmwJHQJy0KXETUsCldbV6Pds7a2ljVr1vDvf/+bs2fPdjqQW1hY4O7uTnBwMGPHjuWqq65i0aJFBAQEUFVVRVxcHDt37uTHH38kPT2dpnN++T2JYUe8g4MDnp6e+Pn5aenBHh4eF+yC9/X1JSAgAF9f326lRO7dC6tWwc6d6lTPI4/AH/+oTgFJuo4UAcmAory+nB1pO4hJiWFrylbK6suwUCyYETRDmzYa4TnC6AuIQgj++9//smbNGo4dO0ZdXZ1Rr9+X0Ol0mqWKwZ/Lzs7uArNHNze3c7vgPaioiGDXrumcOuWNh0cLjzwieOABK0ywb2pAIkVA0iavZmcD8GhQkJlbYhpa9a0czD2ojRISChMAGOI2RJs2mj14NjaWHUvzNLi77tu3jz179mhOkGVlZVoZU3Nx/pu7t7c3YWFhzJw5kxtuuIHAwEAOHjzIp59+yp49e8jJyaG6utpo2VXn+2hZWFhc4o/T2trahtnjAuBZYCZqbv/LwD9R0z6vbPbo5OSkjUyyrKywd3Pj6qFD8fb21jYt+vv7Y29vb5Rn7MtIEZC0yUCbS82uzGZrylZikmPYlbGLhuYGbOtsCW8Mx7nAmfrceory1WJABk8oY38GDEHSWAZijo6OBAcHEx4ezrRp01i4cCGjRo3q0kinoqKCr7/+mi1btnDixAmKi4tpaGgw2t9ADejW2Nr+moaGx2hoGI+1dSEhIRsJCfkea2v171FdXX2B2aNhr0lzc3OX/LnOF5OLzR7PF5OLnYPP3wXf02aPxkSKgKRN+psI6PV60tLSOHToED/99BMnT54kNzeXsrIyzSvGmBkwhk1shsVfRVEuCVodCfSGdFFbW1taW1s7VVhIUZRLAqKiKDg5OTFo0CBGjhzJlClTWLhwIePGjet2CqoQgmPHjrFhwwZ2795NWloalZWVHayctwz1zX8ykAX8DfgIdbfvhZz/d/Xw8NDWHYKDgxk2bBhhYWF4enpSUlLCbfv20Vxayl329hQXF3fI7LGr/lyWlpYXWKpcSUwutlQZNGgQTk5OPS4mUgQkbdLbRaClpYWkpCQOHTrE4cOHOXXqFDk5OZSXl2s1no2+YUwHWAD2YOliqQYe72DsmuwoLfnqFwU+AAAgAElEQVRf4Y6uVnsaNmwY06ZN47rrrmPMmDHtBoS8vDxeeukltmzZQm5ubodEzGBZfPGxiqLg6OiIv78/I0aMYMqUKSxYsIBJkyYZfX9CbW3tuf0F/2XPHjcKC/+AXh+BWpHoReDfdLhk3RVQFAWsrLCwsyPo3IK1j48PgwYNIiQkhGHDhjFy5EhCQ0Mvm54qhLjA7LGgoMDsZo+GXfDni4mhpolBDDtj9ihFQNImPS0CjY2NnDp1imPHjhEfH8+pU6c4e/asVg3JFEFdURTtrdLDw0ObMw4MDCQkJISAgABycnKIj4/n5MmT5OXlUVlZ2eGpIIMtgaurK4MGDSIiIoKlS5eyZMkSk1lKNDc38/HHH7N27VpOnz7dIZsMw9/BcP7Fz+bg4ICfnx/Dhw9n0qRJLFiwgMjIyC6b7en18M03aqrnL7+oNs5PPw233AKGrQ1CCJKSkti4cSO7d+8mMTHR6AaC53N+ZS4XFxc8PT3x9fUlMDCQ4OBgQkNDGTVqFEOGDOm0KAohKC8v13bBn2+pYti42BNmj/b29tjb21/gHPz1119LEZBcniUJ6kLpti7WA66vrychIYFjx45x4sSJC97U6+vrTfOmzv9qJDs6OuLu7q6lMBre/sLCwhg5ciSBgYHEx8dfUO2prKysU9WerKyssLGzwcrZiibXJmr8aiAchg8ZTnRoNMvCljEjaAZWFp3btGUKDh8+zOrVq9mzZw+lpaUdCiiGRdeWlpbLBl97e3t8fX0JCwtj4sSJzJ07l1mzZrW5Sa21FTZuhBdegNOnYcQINfjfeCN01aanvr6eHTt28J///Efz+6+urjbpHgsLCwtNMFxdXfHy8tIEw9DHRo0aRWBgoFFHURebPRYWFlJYWHhJgSxDPQ/DLvh2xESKgKRj1NTUcPz4cY4dO8bJkyc5ffo0OTk5VFRUmDSow//e0gxvL+cH9uDgYO1DFxQUpH3oioqKtLnpxMRECgoKqKmp6XC1J0MNZXd3dwYPHszkyZP51a9+dcXCNenl6cQmxxKTEsOPmT/S1NqEs40zUUOjWBa2jCXDluDlYJxNU8agpKSEN954gw0bNpCVldWhuXuD572VlRVNTU3U1dVd8v/d1tYWX19fhg0bxsSJE5k5cy6FhXN5+WVrkpMhPByeeQauuw56yrk7NTWVb7/9lh9//JEzZ85QWFjYI4VhDHbSjo6OmmD4+fkRFBRESEgIw4cPZ+TIkfj5+fWYLcj5Zo+jRo2SIjAQEUJQUVGhvaWfPn2axMRE8vLyTP6mbkCn013w4bg4sIeGhhIeHn5BYDfQ0tLC9u3biY2N5dixY52u9mRYtHV2dsbf35/Ro0ezYMECVqxY0aFqTB2hpqmGnek7iU2OJTYllvyafBQUpg6aqo0SxvmM63VZJS0tLXz77be89dZbnDhxgurq6g6dZyhwDurbeXV1Na2tCvAbVFfPYShKAu7u7zBmTCoTJkQwZ84c5s2bh4ODg8mepzM0NDSwZ88eNm/ezJEjR8jKyqKioqLH0nstLS21z4RhFOvv709QUBBDhw4lNDSU0aNHG233Ncg1gX6DEIKSkhKOHj3KiRMnOHPmDMnJyeTm5mpz6oZayabk/GGym5ubtmgVFBSkBfaRI0cSHBx8xbee06dP880331xQ7ckwTdNevzNkadjb2+Pp6cmwYcO46qqruO666xgxwvgbwTqCXug5XnBcs7I4nHcYgACnAE0Q5oXMw8G6dwTDy3Hy5EleeeUV4uLiKCgoaOf/gxXwW+BJIAQbm5O4uPyD1tbNVFaWXzJVYzDLGzJkCOPHj2fWrFksXLhQE5Xu8HxmJgDPBAd3+1oGzp49y5YtW4iLi9NGmbW1td3OLju/b7bXzw2lJZ2cnHBzc8Pb2xt/f38GDx7MkCFDGD58OOHh4bhdbKZ06T2lCPRG9Ho9BQUFxMfH88svv3DmzBnS0tK0oG4IiD1hl3y5+U8fHx9t/rOjgd1AZWUlmzdvJi4ujpMnT5Kfn6+ZlHXkeQztcXV1JSgoiHHjxrF8+XIWLFjQZ/z7C2oKtGpq29O2U9N
Ug42FDfNC5hEdGk10WDTBrsHmbma7VFVVsXbtWj755BPS0tJoaBDA74CVQBBwEFgFbNXOURQFNzc3fH19sbGxoampidLSUkpLSy+ZjrKyssLT05OQkBAiIiKYOXMmixYt6lSpUHNluTU3N7N//37+85//cPDgQTIyMqioqDDKwrZOp9M+a0KIdsXHysoKOzs7nJ2dtVrkhpTaVatWSRHoCVpbW8nNzeXIkSMkJCSQmppKeno6ubm5VFZWatMvPfU3vTiwn58JYQjsf9XrsfX3Z8/EiR2+bmtrq9bxjxw5QmZmprbY2pG3JMNiq6OjIz4+PowaNUozIPPx8enOI/damlqb2JO1R1tLSC1LBSDcK5xlYctYFraMaYOmYanrvQVO6uvhn/+El1+GvDwYObIMO7tXSE9/n4qK8g5dw9bWlkGDBuHj44O1tTW1tbXk5uZSXFx8yXSMpaUlHh4emr/SjBkziIqKumwf6e2pznl5eWzbto0dO3ZoGWjGsDI/P9UU1BfLNvbCSBHoCk1NTWRlZWmpjKmpqWRmZpKXl3fBm3pP/o2uFNjPT3Hr6Bv75T48Z8+e5ZtvvtEWW4uKiqitre2wgBnmPD08PAgJCWHKlCmsWLGCKVOm9DqvfHORXJqsWVnsydpDi74FN1s3loQuITo0msXDFuNu1/E3YVNSWwvvvw9r1kBhIcyaBc89B3PnwsWzbqmpqbz++uvExsaSn5/foUVoRVHw9PQkNDQULy8vLC0tKSkpIT09naKiokvqFhjM9wYPHsyYMWO46qqrWB8QgI2PT68VgY7Q3NzM4cOHiYmJ4aeffiI9PZ3S0lLq6+uNEWOkCIC6IJSenk58fDynT58mPT2drKwsLagbdmka45k7M+/XkcBumIrpboH0hoYGtm3bxrZt2zhx4gTH0tJoqalB6eACsU6nw9raGmdnZwICAhgzZgxRUVEsX74cJ2n12CUqGyqJS48jJlk1vCuuK0an6Lgq8CptLWGUV9dsILpDTQ28+y68+ioUF8P8+Wq2z+zZnbtObW0tH3/8MR988AFJSUnU1tZ26DxbW1sGDx5MeHg4vr6+6PV6UlJSSE1N1bJ+LkCnw8PNjaCgIMaMGcP06dOJiooi2IjrBL2B4uJidu7cydatWzlx4gT5+fkd2bHdP0WgtraW1NRULahnZGSQk5Oj/VGM+aZuqHMLalA3fLVFW4H9/F2Mhjf27gZ2A0IIjh8/zubNmzlw4ACpqamUlpZqC8YdeUbDYqvBgGz69OnccMMNWoUniWnRCz2Hcw9ro4RjBepIbbDLYG3aaE7wHGwtu27X3B5VVfD22/D661BaClFRavC/6irj3UMIwe7du3nzzTfZu3cv5eXlHX4J8fDwICwsjAkTJuDn50dlZSX/2rePmqwsOGcPcvE5Li4uBAYGEh4eTmRkJIsWLWL48OHGe6BeRktLCydOnCA2Npa9e/eyc+fO/iECOp1OQPtv1h28lra13mDm1d5uPUO6oyGwG/xMLjcVY6zAfj6lpaVs2rSJXbt2cerUKfLz86murqapqanDOfG2tra4ubkRGBjIhAkTuPrqq5k9e3afWWwdaORU5WjV1Ham76SuuQ57K3sWDFmgLi6HRhPgHGCUe1VUwD/+AX//u/rv6Gg1+E+dapTLd4js7GzefvttNm3aRHZ2dofTNm1tbbURwLRp0wgICCApKYnDhw+TnJxMfn7+JSMQRVFwcXHR/JWmTZvGokWLCA8P73cvPP0mO0hRlMs2xvCWbmFhcYFDY2tra7vFyA0blAx57BcHdsMbe0hIiEkCu4GWlhZ2797Nli1biI+PJysri/LychoaGjq12Ork5ISvry/h4eHMnTuXa665Bm9vb5O1W9JzNLQ08GPmj1oKalZlFgARvhGaLfZk/8mdrqZWVgZvvAFvvqmOAn71KzX4dyJXwKQ0NjbyxRdfsH79ehISEqipqemwnYe7uzuhoaFMmjSJefPm4e7uzt69ezl06BBnzpzRFmjPx2C+FxAQwIgRI5g6dSoLFy4kIiKiz65p9RsRsLCwEJaWlu2mTbYV2C82lBoyZIhJA/v5pKWl8c0337Bv3z6SkpK0xdbOGJDZ2dnh4eHB0KFDmTp1Ktdccw3jx4/vVsd8Ij0dgJeGDOnyNSQ9jxCC08WntWmj/Wf3oxd6vOy9WBq6lOjQaBYNXYSLbdub4kpK1Cmft95S5/+vvVa1d4iI6MEH6SJCCA4cOMC7777Lrl27KCwqQnQwldqQoRQeHs6MGTOIjo6mvr6enTt38vPPP3PmzBlyc3Oprq6+5LPp5OSEn5+fZr43f/58Jk+e3GNxpKv0GxHQ6XTC09PzAuMncwZ2A7W1tcTGxrJt2zYSEhK0lNCO5sQbFlsNQ9Nx48axePFilixZgqOJSyv19tQ6Sccoqy9je+p2YlJi2JayjfKGcix1lswMmqmtJYR5hAFqhs9rr6mLvnV1cMMNavAfPdrMD9ENDP34Sz8/1q5dy1dffUV6evqli8dtoCgK7u7uDB06lEmTJjF//nwWLVpEZmYmO3bs4Oeff9YsVKqqqi5rvufr68uIESO08yMjIy/rWmoO+o0ImGOfQGtrK0ePHmXTpk0cPHjwgrStzhiQOTg44OXlxfDhw5k1axbXXHMNISEhZp97lCLQ/2jRt/Bzzs/aKOFk0UkAgi0icTv6N05vm0lzk46bb1Z46ikYOdLMDTYCV+rHTU1NbNq0iX/+85/Ex8dTWVnZ4XVFW1tb/P39GTVqFNOnT2f58uWEh4eTnJysicOpU6fIzs6msrLykpc+e3t7fHx8NPO9efPmMXPmTKytrbv/0J1AisAVKCgoYNOmTfzwww+cOnWKwsJCampqurTYGhwczMSJE1m+fPkVXRZ7E1IE+j8/n8ph5V/L2Lt5BPpWHYz5FIf5/yBqagjLQpexNHQpPo59e5NeZ/uxoTDO2rVr+e6778jPz+9UER83NzdCQkK0wL5kyRKcnZ1JT09nx44dWkEjg9/Vxet6tra2+Pj4aOZ7c+bMYc6cOdjZ2XXuwTvIgBWB5uZmdu7cSUxMDMeOHdOMohobGzu82GrIiff19dUMyK6++mo8PT278yi9BikC/ZesLFi9Gj78UPX2/+1v4aFH6sjU7dJGCbnVuQBM9p/MsrBlRIdGM95vPDqlby2AGqsfl5WVsX79er788kuSkpI6tVHLxsYGPz8/Ro4cSWRkJMuWLdMquZ09e5YdO3awf/9+fvnlF7KysigrK7skDtnY2ODt7c3QoUOZMGECs2fPZt68ed2eFu7VIqAoymLgTdRaTuuFEKvbOvZ8ERBCcObMGTZt2sS+ffs0AzJDTnxnqj0ZDMimTZvGihUrGDNmTK9f6DEWvzl9GoBPR40yc0skxiI9HV56CT76SN3R+7vfwcqVcPGeKSEEJwpPaFYWB3MOIhD4OfqxNHQpy8KWsWDIAhytTbsuZQxM2Y9bW1v573//ywcffMCBAwc6vKcB/peGGhISwvjx45k3bx5Lly7VDN8KCgqIi4tj7969JCQkkJmZSWlp6RXN9yIiIp
g1axYLFizA1dW1o+3onSKgKIoFkAwsBHKAw8DNQojTlztep9MJQ/pnexhy+g2LrREREURHRzN//nyTL7ZKJOYgJQX+9jf497/V4i2//z08/jgEBnbs/OLaYralqoZ336V+R1VjFdYW1swJnqOloA5xk1lkBk6dOsW6deuIiYnh7NmzHayxrGJtba0tJEdGRrJ06dILSnyWlpZq4nD8+HEyMjIoKSm5rPmeh4cHQ4YMYdy4ccycOZOFCxdeMlPRm0UgEvg/IUTUue+fABBCvNTG8eLcfzUDMm9vb0aMGMHs2bO55pprCAwMNPtiq0TSk5w5Ay++CJ9/DtbWcM898Nhj4O/f9Ws2tzazL3sfsSmxxCTHkFSaBMBIz5GalcX0wOm9oppab6KqqopPPvmEzz77jJMnT1JbW9vh6SRFUXB2diY4OJhx48Yxd+5coqOjL6grUFlZyc6dO9mzZw/Hjh0jPT29TfM9d3d3zXxv/fr1vVYErgMWCyF+f+7724CpQoj7L3f8xIkTRXx8vMnaMxB5OCUFgDdCQ83cEklnOXVKLeG4YQPY2cF998Ejj4Cvr/HvlVqWqk0b7c7cTbO+GVdbV62a2uJhi/G0N986WW/ux3q9nri4OD788EN2795NSUlJp2oSWFlZaRlGkZGRLF68mOnTp1+wP6i2tpZdu3axe/dujh07RmpqKsXFxeenyPZaEbgeiLpIBKYIIR4475i7gbsBgoKCJmZlZZmsPQMRuTDc9zhxQg3+X38Njo5w//3w5z+DEQtRXZHqxmri0uO0amqFtYXoFB3TBk3Tpo3GeI/p0RF5X+zHaWlprF+/ns2bN5ORkXGJW2p7ODs7a7U2Zs+ezdVXX32JzXZDQwM//PADS5cu7bUi0KnpoN5eT6Av0hc/PAOVo0fh+edh82ZwdoYHH4SHHwYPD/O1SS/0xOfFa9NG8fnqSD3QOfCCamp2VqZJfTTQX/pxXV0dn3/+OZ999hlHjx697K7lK2FlZYWXlxdhYWFMnTqVqKgoZs6ciZWVVa8VAUvUheH5QC7qwvAtQohTlzteioDx6S8fnv7MoUNq8I+JAVdXNfA/+CC0U1HQLORX52uGdzvSdlDbXIutpS3zQ+Zr1dSCXIKMft/+3I/1ej379+9n/fr1qkVGYWFXitF0WQRMuudZCNGiKMr9wHbUFNEP2xIAiWSgceAArFoF330H7u7qFND994NL23ZAZsfPyY+7JtzFXRPuorGlkT1Ze1TDuxR1XwJbYazPWG2UMDVgaqcN7wYaOp2OmTNnMnPmzAt+npOTwwcffMCmTZtITk6mvr7eJPfv05vFJO1zd5Ka9bGuH3ur9zX27FHf/HfuBE9PePRRddG3L9fsEUKQVJqkbVLbm7WXVtGKh52HVk0tamgUbnZdG97IfqxSX1/P119/zWeffcbhw4cpLy83TCf1zumgziJFQNJfEQJ+/BH++lfYvRt8fNQ0z3vuAQcHc7fO+FQ0VLA9dTuxKbFsTdlKaX0pFooFM4JmaKOEEZ4jZLq3ERBCoNPppAhIJL0RIdQ3/lWrYN8+8PNTN3j94Q9gb2/u1vUMrfpWDuUe0kYJJwpPABDiGqI5oM4ePBsbS1n4qKv02s1inUWKgPGRw2jzIARs26YG/4MHYdAg1drhrrvA1nSVI/sEZyvPEpuipp/uSt9FfUs9DlYOLBy6kOjQaJaGLsXf6cKdcLIfX5nuiEDvMMOWmIzki+qxSkyLEPDf/6rBPz4eBg+G999Xzd1khU+VQJdA7pl0D/dMuof65np+yPxBGyVsPrMZgAl+E7Q9CZP8J8l+bEKkCEgkRkCvV/P7n38ejh+HIUPggw/gttugDziOmw07KzuWhi5laehShBCcLDqp7Ul4Ye8LrNqzCh8HHxT3qbj7zKRq1FCcbZzN3ex+hZwO6uf05/zq3kBrK3zzjRr8T56E0FC1itctt6gmb5KuU1pXynep3xGTEsPXSbG0NFdjpbNi1uBZ2lrCMPdh5m5mr0CuCUjaRIqAaWhtVT19XngBEhNhxAi1ePuNN8IAcSnvUWbHH6ay/ASLRDKxKbGcLlaNiMM8wrRpoxlBM7C26NmKXr0FuSYgaZMIaattVFpaVDfPF1+E5GS1bu+GDWoRdxn8Tcd4Z1dwns2a0N+zZuEaMsoztGmjtw+/zes/v46zjTNRQ6OIDo1mSegSvB28zd3sPoEcCUgkHaC5WfXxf/FFtajLuHHw7LPw61+Drm8V6ep31DTVsCt9lyYK+TX5KChMCZiiVVOL8I3o13sS5HSQRGIiGhvh44/VYi5ZWTBxohr8ly9Xq3pJehdCCI4VHNNssQ/nHkYgCHAK0LyN5ofMx8G6f+3QkyIgaRNZXrJrNDSotXtXr4azZ2HqVDX4L1kig7856Go/LqwpZFvqNmKSY9iRtoPqpmpsLGyYGzJXW0sIdg02QYt7FrkmIGmTnE56mA906uth3TpYswby8uCqq2D9eli4UAZ/c9LVfuzj6MMdEXdwR8QdNLU2sTdrrzZtdP+2+7l/2/2Ee4VrVhaRgZFY6gZWWBxYTyuRtEFtLaxdC6+8AoWFMHs2fPopzJkjg39/wdrCmvlD5jN/yHxej3qd5NJkbdro9Z9fZ81Pa3CzdWPxsMVaNTV3O3dzN9vkSBGQDGiqq+Hdd+HVV6GkBObPV7N9Zs82d8skpibMI4ywyDD+FPknKhsq1WpqKbHEJsfyxckv0Ck6pgdO16aNwr3C++XishQByYCkshLefhtefx3KymDxYjXPf/p0c7dMYg5cbF24btR1XDfqOvRCz5G8I2qdhOQYVu5aycpdKxnsMlibNpobMhdby/5hAiVFoJ8T2ZsrlJiB8nL4xz/gjTegogKWLVOD/5Qp5m6Z5Er0ZD/WKTqmBExhSsAUVs1dRW5VrlZN7aMTH/HukXext7Jnfsh8LQU1wDmgx9pnbGR2kGRAUFqqBv5//AOqqtT8/qefVlM+JZKO0tDSwO7M3Vo1tcyKTAAifCO0UcJk/8k9Xk1NpohKJG1QXKxO+bz9NtTUwHXXqcF/3Dhzt0zS1xFCkFiSqDmg7s/eT6toxcveiyWhS1gWuoxFQxfhYmv6UYwUAUmbXHvyJADfjB5t5pb0LIWF6mLvu++qaZ833ghPPaXaPEj6Hn2hH5fVl2nV1LalbqOsvgxLnSUzg2Zqo4QwjzCTLC7LfQKSNiltbjZ3E3qUvDw1zXPtWmhqUt08n3pKNXiT9F36Qj92t3Pn5jE3c/OYm2nVt/Jzzs/aKOHRuEd5NO5RhroN1RxQZw2e1SsM76QISPoFOTnw8svwz3+qJm+33QZPPqlaO0skPY2FzoKrgq7iqqCreGnBS2RVZLE1ZSsxKTG8H/8+bx58E0drRxYNXaRVU/N19DVLW6UISPo0WVmqtcOHH6qFXe64A554Qi3qIpH0Fga7Dubeyfdy7+R7qWuu4/uM77VRwreJ3wIw2X+yNm003m88OqVnnAmlCEj6JOnpqqnbxx+rO3rvukut4Tt4sLlbJpFcGXsre21KSAhBQmGCJgh/3f1X/m/3/+Hr6
KsJwoIhC3C0Np0lvBSBfs58NzdzN8GopKSods6ffqpW7rrnHvjLXyAw0Nwtk5iS/taPDSiKwjjfcYzzHcdTs56iuLb4f9XUTn/NB8c+wNrCmjnBc1QX1NBohroPNW4bTJUdpCjK/wF/AIrP/ehJIcTWK50js4MkbZGYqAb/L75QC7b/v/8Hjz0G/v7mbplEYhqaW5vZf3a/5m90puQMACM8R2hWFlcFXoWVhVXvTBE9JwI1QohXO3qOFAHJxZw8qZZw3LgR7Ozgj3+ERx4BHx9zt0wi6VnSytI0B9TdWbtpam3CxcaFxcMWs+H6DTJFVHJ5liQkALBt7Fgzt6RznDihFm//5htwdFTn+//0J/DyMnfLJOagr/ZjYzLUfSgPTn2QB6c+SHVjNTvTd6qGdymx3bquqUXgfkVRbgeOAI8IIcpNfD/JRdS3tpq7CZ0iPl4N/v/5Dzg7q74+Dz0EHh7mbpnEnPS1fmxqnGycWDFyBStGrkAv9Fg82nWbim7lICmKslNRlJOX+foV8B4wFIgA8oHX2rjG3YqiHFEU5UhxcfHlDpEMAA4eVM3cJk2C3bvhr39V0z9XrZICIJFcie6mknZrJCCEWNCR4xRF+ScQ08Y11gHrQF0T6E57JH2Pn35SA/327eDuri7+3n+/OgqQSCSmx2S7ERRF8Tvv2xXASVPdS9L32LMHFixQyzceParu9s3MVHf5SgGQSHoOU64JrFEUJQIQQCbw/0x4L0kbLOtFcylCwA8/qG/+u3erGT6vvaamezo4mLt1kt5Mb+rH/Q3pIioxOUJAXJwa/PfvV3P7H38c/vAHNe1TIpF0j+7sE+gZcwrJgEQI2LoVIiMhKkpd6H3nHUhLgwcflAIgkfQGpAj0c+YcO8acY8d69J5CqCmekydDdDQUFMD770NqKtx3H9j2j9Kskh7EHP14oCBFQGI09Hp1c9f48Wr5xooK1d0zJQXuvlu1e5BIJL0LKQKSbtPaChs2wNixavnGujrV3fPMGbjzTrCyMncLJRJJW0gRkHSZlhb47DO1ZONNN6kjgc8+U83ebr9ddfmUSCS9GykCkk7T0qK+6Y8aBb/5jRrsN25Uzd5uuQUsur6DXSKR9DDyXa2fc4O3t9Gu1dQE//63WswlPR0iIuDbb+FXvwKdfJ2QmBBj9mPJhUgR6OfcFxDQ7Ws0NsJHH8FLL6lpnpMmwRtvqF4/itL9Nkok7WGMfiy5PPL9rZ9T19pKXRcdGBsa1Lz+YcPUCl6+vmre/6FDsHy5FABJz9Gdfiy5MnIk0M9Zes6H/cfx4zt8Tl0d/POfqp9Pfr7q7/Phh6rXjwz8EnPQlX4s6RhSBCQatbWwdi288goUFsKcOWq2z5w5MvhLJP0VKQISqqvVaZ/XXoOSEvWNf+NGmDXL3C2TSCSmRorAAKayEt56C/7+dygrg8WL1Upe06ebu2USiaSnkCIwACkvhzffVL8qKtRF3qefhilTzN0yiUTS00gR6Ofc4eur/bu0VH3rf+stqKqCFSvU4D9hghkbKJF0gPP7scS4SBHo59zh50dxMaxcqc7719aq/j5PP616/UgkfYE7/PzaP0jSJaQI9GMKCuD5l1v5aJ2O+nqFm26Cp56C8HBzt0wi6RwlTU0AeFpbm7kl/Q8pAv2QvDxYs0b18G9o0uGzpJz4V90ZMcLcLZNIusZ1p04Bcp+AKZAi0I84e1bd4LV+vWrydvvtcOpXp7ELamLECHdzN08ikfRCpG1EPyAzU2czTncAAA2TSURBVLV1GDpUffu//XZITlZ3+doFNZm7eRKJpBcjRwJ9mLQ01dTt449VF8/f/14t4D54sLlbJpFI+gpSBPogycnw4ouqpYOlJdx7L/zlLzBokLlbJpFI+hpSBPoQiYlq8P/iC7Ve74MPwmOPwZWy5+6VFrySfoDsx6ZDikAf4Jdf4IUX4KuvwN4eHnlE/fLxaf/cG2UxDkk/QPZj09GthWFFUa5XFOWUoih6RVEmXfS7JxRFSVUUJUlRlKjuNXNgcvw4XHutuqlr2zZ44gl1EXjNmo4JAMDZhgbONjSYtJ0SiamR/dh0dHckcBK4Bnj//B8qijIKuAkIB/yBnYqihAkhZFWIDnDkCDz/PGzZAi4u8Oyz8NBD4N6FLM/bEhMBmV8t6dvIfmw6uiUCQohEAOVSs/lfAV8KIRqBDEVRUoEpwIHu3K+/c/AgrFqlVu9yc1P//cAD4Opq7pZJJJL+iqnWBAKAn8/7PufczySXYf9+NeDv2AEeHmoh9z/+EZydzd0yiUTS32lXBBRF2QlczsLvKSHEf9o67TI/E21c/27gboCgoKD2mtOv2L1bDf7ffw9eXupc/733gqOjuVsmkUgGCu2KgBBiQReumwMEnvf9ICCvjeuvA9YBTJo06bJC0Z8QQg36q1bBnj1q8fbXX4e77wYHB3O3TiKRDDRMNR20BfhcUZTXUReGQ4FDJrpXn0AIdbpn1Sr46Sfw94d//EPd5WtnZ7r7PhIY2P5BEkkvR/Zj09EtEVAUZQXwFuAFxCqKclwIESWEOKUoykbgNNAC/HGgZgYJoS70rloFhw5BYCC8+y7ceSfY2pr+/ss9PU1/E4nExMh+bDq6mx20CdjUxu9eBF7szvX7MkKoKZ6rVsHRoxAcDOvWwW9/Cz1piZ5UVwfAcHv7nrupRGJkZD82HXLHsJHR6+Hbb9UdvidOqM6eH34Iv/kNWFn1fHv+X1ISIPOrJX0b2Y9Nh7SSNhKtrfDll+ru3uuvh/p6+OQTOHNGnfoxhwBIJBJJe0gR6CYtLfDpp2rJxptvVkcCn38Op0/DbbepLp8SiUTSW5Ei0EWam+Gjj2DkSDXYW1vDxo1w8qQqBhYW5m6hRCKRtI98T+0kTU3qNM/f/gYZGTB+PGzaBFdfrRZ2kUgkkr6EFIEO0tgI//qXWskrOxsmTVLz/KOj4VLrpN7D07LMmKQfIPux6ZAi0A4NDWrh9tWrITcXpk1T6/hGRfXu4G9gQVesRyWSXobsx6ZDikAb1NWpef1r1kB+PsyYoa4BzJ/fN4K/gePV1QBEODmZuSUSSdeR/dh0SBG4iJoaWLsWXnkFiopg7lw122f27L4V/A08nJoKyPxqSd9G9mPTIUXgHNXV8M478NprUFICCxfCM8/AzJnmbplEIpGYjgEvApWV8NZb8Pe/Q1kZLFmiBv/ISHO3TCKRSEzPgBWB8nJ44w14801VCJYvV4P/5MnmbplEIpH0HANOBEpL1bf+f/xDnQJasUIN/nKqUSKRDEQGjAgUFanz/e+8o2b+XH89PP00jBlj7paZlr8NGWLuJkgk3Ub2Y9PR70WgoEDN9HnvPXXD1003wVNPwahR5m5ZzzDdxcXcTZBIuo3sx6aj34pAbq6a479unWr18JvfwJNPwvDh5m5Zz/JTZSUgP0SSvo3sx6aj34lAdja8/LK6y1evh9tvhyeegGHDzN0y8/Bkejog86slfRvZj01HvxGBzEzV1+df/1K/v/NOWLkSQkLM2iyJRCLp1fR5EUhLUx09P/lEdfH8wx/g8cchKMjc
LZNIJJLeT58VgaQkNfh/9plateu+++Avf4GAAHO3TCKRSPoOfU4ETp+GF19USzna2MBDD8Gjj/L/27vfGCuuMo7j31+pQAh/w9qUWBAaoeFPiVZC2jdVQ6MNUYhNVUwaW20kFOUFEqMNSW3AvrFpNEZrwdigjVr+NBTQEixarTFuBUNKKRUC2BYQslIUX7Si4OOLmXY3ZHfv7M7OzN6Z3ychmd2Ze+6Th3Pvs3PmzBmmTKk6MjOz9tM2ReCll5KHt2/dCmPGJF/8a9bANddUHdnw9p2mXhG3WnE/Ls6wLwIHDsD69cnTu8aNS2b6rF4NHR1VR9YevPSu1YH7cXGGbRHYty/58t+1CyZMgAceSIZ+/GyJgdl7/jzgh3JYe3M/Lk6uIiDpU8CDwGxgYUTsT38/HXgFOJIe2hkRK7K02dkJ69bB7t0waVKyvWoVTJyYJ9Lm+uZrrwH+8Fh7cz8uTt4zgUPAHcCGXvYdj4j3D6Sxo0eTJZwnT07m/K9cCePH54zQzMz6lKsIRMQrABqiR2699Vayzs+KFTB27JA0aWZm/biqwLZnSDog6XeSMj2f68Ybk1k/LgBmZuVoeSYgaS9wbS+71kbEjj5edgaYFhFvSPog8LSkuRHxr17aXw4sB5jm23zNzErVsghExG0DbTQiLgIX0+0/SzoOzAL293LsRmAjwIIFC2Kg72X929C0ZVOtltyPi1PIFFFJ7wbOR8RlSdcDM4ETRbyX9e+GMWOqDsEsN/fj4uS6JiDpk5JOAbcAv5S0J911K3BQ0ovANmBFRJzPF6oNxq5z59h17lzVYZjl4n5cnLyzg7YD23v5/VPAU3natqHxyMmTAHzCt1hbG3M/Lk6Rs4PMzGyYcxEwM2swFwEzswZzETAza7Bhu4qoDY0nZs+uOgSz3NyPi+MiUHNTR4+uOgSz3NyPi+PhoJrb3NXF5q6uqsMwy8X9uDg+E6i5H5w+DcBn/BxOa2Pux8XxmYCZWYO5CJiZNZiLgJlZg7kImJk1mC8M19y2uXOrDsEsN/fj4rgI1FzHyJFVh2CWm/txcTwcVHObzpxh05kzVYdhlov7cXFcBGpu09mzbDp7tuowzHJxPy6Oi4CZWYO5CJiZNZiLgJlZg7kImJk1mKeI1twz8+dXHYJZbu7HxXERqLkxI0ZUHYJZbu7HxfFwUM09evo0j6bL8Jq1K/fj4rgI1NyWri62+GEc1ubcj4uTqwhIeljSXyQdlLRd0sQe++6XdEzSEUkfyx+qmZkNtbxnAs8C8yJiPnAUuB9A0hxgGTAXuB14VJIH9czMhplcRSAifhURl9IfO4Hr0u2lwJMRcTEi/gocAxbmeS8zMxt6Q3lN4AvA7nT7PcDJHvtOpb8zM7NhpOUUUUl7gWt72bU2Inakx6wFLgE/fftlvRwffbS/HFie/nhR0qFWMTVEB3BuqBrr7T+kjQxpLtpco3NxRT9udC6ucMNgX9iyCETEbf3tl3Q38HFgUUS8/UV/Cpja47DrgL/10f5GYGPa1v6IWJAh7tpzLro5F92ci27ORTdJ+wf72ryzg24HvgYsiYg3e+zaCSyTNErSDGAm8Kc872VmZkMv7x3D3wNGAc9KAuiMiBUR8bKkLcBhkmGiL0XE5ZzvZWZmQyxXEYiI9/Wz7yHgoQE2uTFPPDXjXHRzLro5F92ci26DzoW6h/HNzKxpvGyEmVmDVVIEJN2eLidxTNLXe9k/StLmdP8LkqaXH2U5MuTiK5IOp0tz/FrSe6uIswytctHjuDslhaTazgzJkgtJn077xsuSflZ2jGXJ8BmZJuk5SQfSz8niKuIsmqTHJXX1NY1eie+meToo6aZMDUdEqf+AEcBx4HpgJPAiMOeKY1YCj6Xby4DNZcc5jHLxEWBMun1fk3ORHjcOeJ7kDvUFVcddYb+YCRwAJqU/X1N13BXmYiNwX7o9B3i16rgLysWtwE3AoT72Lya5YVfAzcALWdqt4kxgIXAsIk5ExH+AJ0mWmehpKfDjdHsbsEjp9KOaaZmLiHguuqff9lyao26y9AuA9cC3gH+XGVzJsuTii8D3I+IfABFR1yU2s+QigPHp9gT6uCep3UXE88D5fg5ZCvwkEp3ARElTWrVbRRHIsqTEO8dEsjbRBWByKdGVa6DLa9xL99IcddMyF5I+AEyNiF+UGVgFsvSLWcAsSX+Q1Jnes1NHWXLxIHCXpFPAM8CqckIbdga1XE8VTxbLsqRE5mUn2txAlte4C1gAfKjQiKrTby4kXQV8G7inrIAqlKVfXE0yJPRhkrPD30uaFxH/LDi2smXJxWeBTRHxiKRbgCfSXPyv+PCGlUF9b1ZxJpBlSYl3jpF0NckpXn+nQe0q0/Iakm4D1pLcmX2xpNjK1ioX44B5wG8lvUoy5rmzpheHs35GdkTEfyNZqfcISVGomyy5uBfYAhARfwRGk6wr1DSZl+vpqYoisA+YKWmGpJEkF353XnHMTuDudPtO4DeRXvmomZa5SIdANpAUgLqO+0KLXETEhYjoiIjpETGd5PrIkogY9Jopw1iWz8jTJJMGkNRBMjx0otQoy5ElF68DiwAkzSYpAn8vNcrhYSfwuXSW0M3AhYg40+pFpQ8HRcQlSV8G9pBc+X88kmUm1gH7I2In8COSU7pjJGcAy8qOswwZc/EwMBbYml4bfz0illQWdEEy5qIRMuZiD/BRSYeBy8BXI+KN6qIuRsZcrAF+KGk1yfDHPXX8o1HSz0mG/zrS6x/fAN4FEBGPkVwPWUzy/JY3gc9nareGuTIzs4x8x7CZWYO5CJiZNZiLgJlZg7kImJk1mIuAmVmDuQiYmTWYi4CZWYO5CJiZNdj/AfYEjbWN5IUkAAAAAElFTkSuQmCC\n", "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "%matplotlib inline\n", "plot_pomdp_utility(utility)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "## Appendix\n", "\n", "Surprisingly, it turns out that there are six other optimal policies for various ranges of R(s). \n", "You can try to find them out for yourself.\n", "See **Exercise 17.5**.\n", "To help you with this, we have a GridMDP editor in `grid_mdp.py` in the GUI folder. \n", "
\n", "Here's a brief tutorial on how to use it.\n", "
\n", "Let us use it to solve `Case 2` above:\n", "1. Run `python gui/grid_mdp.py` from the master directory.\n", "2. Enter the dimensions of the grid (3 x 4 in this case), and click on `'Build a GridMDP'`.\n", "3. Click on `Initialize` in the `Edit` menu.\n", "4. Set the reward to -0.4 and click `Apply`. Exit the dialog. \n", "![title](images/ge0.jpg)\n", "
\n", "5. Select cell (1, 1) and check the `Wall` radio button. `Apply` and exit the dialog.\n", "![title](images/ge1.jpg)\n", "
\n", "6. Select cells (4, 1) and (4, 2) and check the `Terminal` radio button for both. Set the rewards appropriately and click on `Apply`. Exit the dialog. Your window should look something like this.\n", "![title](images/ge2.jpg)\n", "
\n", "7. You are all set up now. Click on `Build and Run` in the `Build` menu and watch the heatmap calculate the utility function.\n", "![title](images/ge4.jpg)\n", "
\n", "Green shades indicate positive utilities and brown shades indicate negative utilities. \n", "The values of the utility function and arrow diagram will pop up in separate dialogs after the algorithm converges." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" }, "widgets": { "state": { "001e6c8ed3fc4eeeb6ab7901992314dd": { "views": [] }, "00f29880456846a8854ab515146ec55b": { "views": [] }, "010f52f7cde545cba25593839002049b": { "views": [] }, "01473ad99aa94acbaca856a7d980f2b9": { "views": [] }, "021a4a4f35da484db5c37c5c8d0dbcc2": { "views": [] }, "02229be5d3bc401fad55a0378977324a": { "views": [] }, "022a5fdfc8e44fb09b21c4bd5b67a0db": { "views": [ { "cell_index": 27 } ] }, "025c3b0250b94d4c8d9b33adfdba4c15": { "views": [] }, "028f96abfed644b8b042be1e4b16014d": { "views": [] }, "0303bad44d404a1b9ad2cc167e42fcb7": { "views": [] }, "031d2d17f32347ec83c43798e05418fe": { "views": [] }, "03de64f0c2fd43f1b3b5d84aa265aeb7": { "views": [] }, "03fdd484675b42ad84448f64c459b0e0": { "views": [] }, "044cf74f03fd44fd840e450e5ee0c161": { "views": [] }, "054ae5ba0a014a758de446f1980f1ba5": { "views": [] }, "0675230fb92f4539bc257b768fb4cd10": { "views": [ { "cell_index": 27 } ] }, "06c93b34e1f4424aba9a0b172c428260": { "views": [] }, "077a5ea324be46c3ad0110671a0c6a12": { "views": [] }, "0781138d150142a08775861a69beaec9": { "views": [] }, "0783e74a8c2b40cc9b0f5706271192f4": { "views": [ { "cell_index": 27 } ] }, "07c7678b73634e728085f19d7b5b84f7": { "views": [] }, "07febf1d15a140d8adb708847dd478ec": { "views": [] }, "08299b681cd9477f9b19a125e186ce44": { "views": [] }, "083af89d82e445aab4abddfece61d700": { "views": [] }, "08a1129a8bd8486bbfe2c9e49226f618": { "views": [] }, "08a2f800c0d540fdb24015156c7ffc15": { "views": [] }, "097d8d0feccc4c76b87bbcb3f1ecece7": { "views": [] }, "098f12158d844cdf89b29a4cd568fda0": { "views": [ { "cell_index": 27 } ] }, "09e96f9d5d32453290af60fbd29ca155": { "views": [] }, "0a2ec7c49dcd4f768194483c4f2e8813": { "views": [] }, "0b1d6ed8fe4144b8a24228e1befe2084": { "views": [] }, "0b299f8157d24fa9830653a394ef806a": { "views": [] }, "0b2a4ac81a244ff1a7b313290465f8f4": { "views": [] }, "0b52cfc02d604bc2ae42f4ba8c7bca4f": { "views": [] }, "0b65fb781274495ab498ad518bc274d4": { "views": [ { "cell_index": 27 } ] }, "0b865813de0841c49b41f6ad5fb85c6a": { "views": [] }, "0c2070d20fb04864aeb2008a6f2b8b30": { "views": [] }, "0cf5319bcde84f65a1a91c5f9be3aa28": { "views": [] }, "0d721b5be85f4f8aafe26b3597242d60": { "views": [] }, "0d9f29e197ad45d6a04bbb6864d3be6d": { "views": [] }, "0e03c7e2c0414936b206ed055e19acba": { "views": [] }, "0e2265aa506a4778bfc480d5e48c388b": { "views": [] }, "0e4e3d0b6afc413e86970ec4250df678": { "views": [] }, "0e6a5fe6423542e6a13e30f8929a8b02": { "views": [] }, "0e7b2f39c94343c3b0d3b6611351886e": { "views": [] }, "0eb5005fa34440988bcf3be231d31511": { "views": [] }, "104703ad808e41bc9106829bb0396ece": { "views": [] }, "109c376b28774a78bf90d3da4587d834": { "views": [] }, "10b24041718843da976ac616e77ea522": { "views": [] }, "11516bb6db8b45ef866bd9be8bb59312": { "views": [] }, "1203903354fa467a8f38dbbad79cbc81": { "views": [] }, "124ecbe68ada40f68d6a1807ad6bcdf9": { "views": [] }, "1264becdbb63455183aa75f236a3413e": { "views": [] }, "13061cc21693480a8380346277c1b877": { "views": [] }, 
"130dd4d2c9f04ad28d9a6ac40045a329": { "views": [] }, "1350a087b5a9422386c3c5f04dd5d1c9": { "views": [] }, "139bd19be4a4427a9e08f0be6080188e": { "views": [] }, "13f9f589d36c477f9b597dda459efd16": { "views": [] }, "140917b5c77348ec82ea45da139a3045": { "views": [] }, "145419657bb1401ba934e6cea43d5fd1": { "views": [] }, "15d748f1629d4da1982cd62cfbcb1725": { "views": [] }, "17ad015dbc744ac6952d2a6da89f0289": { "views": [] }, "17b6508f32e4425e9f43e5407eb55ed3": { "views": [] }, "185598d8e5fc4dffae293f270a6e7328": { "views": [] }, "196473b25f384f3895ee245e8b7874e9": { "views": [] }, "19c0f87663a0431285a62d4ad6748046": { "views": [] }, "1a00a7b7446d4ad8b08c9a2a9ea9c852": { "views": [] }, "1a97f5b88cdc4ae0871578c06bbb9965": { "views": [] }, "1a9a07777b0c4a45b33e25a70ebdc290": { "views": [] }, "1af711fe8e4f43f084cef6c89eec40ae": { "views": [ { "cell_index": 27 } ] }, "1aff6a6e15b34bb89d7579d445071230": { "views": [] }, "1b1ea7e915d846aea9efeae4381b2c48": { "views": [] }, "1ba02ae1967740b0a69e07dbe95635cb": { "views": [] }, "1c5c913acbde4e87a163abb2e24e6e38": { "views": [ { "cell_index": 27 } ] }, "1cfca0b7ef754c459e1ad97c1f0ceb3b": { "views": [] }, "1d8f6a4910e649589863b781aab4c4d4": { "views": [] }, "1e64b8f5a1554a22992693c194f7b971": { "views": [] }, "1e8f0a2bf7614443a380e53ed27b48c0": { "views": [] }, "1f4e6fa4bacc479e8cd997b26a5af733": { "views": [] }, "1fdf09158eb44415a946f07c6aaba620": { "views": [] }, "200e3ebead3d4858a47e2f6d345ca395": { "views": [ { "cell_index": 27 } ] }, "2050d4b462474a059f9e6493ba06ac58": { "views": [] }, "20b5c21a6e6a427ba3b9b55a0214f75e": { "views": [] }, "20b99631feba4a9c98c9d5f74c620273": { "views": [] }, "20bcff5082854ab89a7977ae56983e30": { "views": [] }, "20d708bf9b7845fa946f5f37c7733fee": { "views": [] }, "210b36ea9edf4ee49ae1ae3fe5005282": { "views": [] }, "21415393cb2d4f72b5c3f5c058aeaf66": { "views": [] }, "2186a18b6ed8405a8a720bae59de2ace": { "views": [] }, "220dc13e9b6942a7b9ed9e37d5ede7ba": { "views": [] }, "221a735fa6014a288543e6f8c7e4e2ef": { "views": [] }, "2288929cec4d4c8faad411029f5e21fa": { "views": [] }, "22b86e207ea6469d85d8333870851a86": { "views": [] }, "23283ad662a140e3b5e8677499e91d64": { "views": [] }, "23a7cc820b63454ca6be3dcfd2538ac1": { "views": [] }, "240ed02d576546028af3edfab9ea8558": { "views": [] }, "24678e52a0334cb9a9a56f92c29750be": { "views": [] }, "247820f6d83f4dd9b68f5df77dbda4b7": { "views": [] }, "24b6a837fbd942c9a68218fb8910dcd5": { "views": [] }, "24ee3204f26348bca5e6a264973e5b56": { "views": [] }, "262c7bb5bd7447f791509571fe74ae44": { "views": [] }, "263595f22d0d45e2a850854bcefe4731": { "views": [] }, "2640720aa6684c5da6d7870abcbc950b": { "views": [] }, "265ca1ec7ad742f096bb8104d0cf1550": { "views": [] }, "26bf66fba453464fac2f5cd362655083": { "views": [] }, "29769879478f49e8b4afd5c0b4662e87": { "views": [] }, "29a13bd6bc8d486ca648bf30c9e4c2a6": { "views": [] }, "29c5df6267584654b76205fc5559c553": { "views": [] }, "29ce25045e7248e5892e8aafc635c416": { "views": [] }, "2a17207c43c9424394299a7b52461794": { "views": [] }, "2a777941580945bc83ddb0c817ed4122": { "views": [] }, "2ae1844e2afe416183658d7a602e5963": { "views": [] }, "2afa2938b41944cf8c14e41a431e3969": { "views": [] }, "2bdc5f9b161548e3aab8ea392b5af1a1": { "views": [] }, "2c26b2bcfc96473584930a4b622d268e": { "views": [] }, "2ca2a914a5f940b18df0b5cde2b79e4b": { "views": [] }, "2ca2c532840548a9968d1c6b2f0acdd8": { "views": [] }, "2d17c32bfea143babe2b114d8777b15d": { "views": [] }, "2d3acd8872c342eab3484302cac2cb05": { "views": [ { "cell_index": 27 } ] }, 
"2dc514cc2f5547aeb97059a5070dc9e3": { "views": [] }, "2e1351ad05384d058c90e594bc6143c1": { "views": [ { "cell_index": 27 } ] }, "2e9b80fa18984615933e41c1c1db2171": { "views": [] }, "2ef17ee6b7c74a4bbbbbe9b1a93e4fb6": { "views": [] }, "2f5438f1b34046a597a467effd43df11": { "views": [ { "cell_index": 27 } ] }, "2f8d22417f3e421f96027fca40e1554f": { "views": [] }, "2fb0409cfb49469d89a32597dc3edba9": { "views": [] }, "303ccef837984c97b7e71f2988c737a4": { "views": [] }, "3058b0808dca48a0bba9a93682260491": { "views": [] }, "306b65493c28411eb10ad786bbf85dc5": { "views": [] }, "30f5d30cf2d84530b3199015c5ff00eb": { "views": [] }, "310b1ac518bd4079bdb7ecaf523a6809": { "views": [] }, "313eca81d9d24664bcc837db54d59618": { "views": [] }, "31413caf78c14548baa61e3e3c9edc55": { "views": [] }, "317fbd3cb6324b2fbdfd6aa46a8d1192": { "views": [] }, "319425ba805346f5ba366c42e220f9c6": { "views": [ { "cell_index": 27 } ] }, "31fc8165275e473f8f75c6215b5184ff": { "views": [] }, "329f12edaa0c44d2a619450f188e8777": { "views": [] }, "32edf057582f4a6ca30ce3cb685bf971": { "views": [] }, "330e74773ba148e18674cfa3e63cd6cc": { "views": [] }, "332a89c03bfb49c2bb291051d172b735": { "views": [ { "cell_index": 27 } ] }, "3347dfda0aca450f89dd9b39ca1bec7d": { "views": [] }, "336e8bcfd7cc4a85956674b0c7bffff2": { "views": [] }, "3376228b3b614d4ab2a10b2fd0f484fd": { "views": [] }, "3380a22bc67c4be99c61050800f93395": { "views": [] }, "34b5c16cbea448809c2ccbce56f8d5a5": { "views": [] }, "34bb050223504afc8053ce931103f52c": { "views": [] }, "34c28187175d49198b536a1ab13668c4": { "views": [] }, "3521f32644514ecf9a96ddfa5d80fb9b": { "views": [] }, "36511bd77ed74f668053df749cc735d4": { "views": [] }, "36541c3490bd4268b64daf20d8c24124": { "views": [] }, "37aa1dd4d76a4bac98857b519b7b523a": { "views": [] }, "37aa3cfa3f8f48989091ec46ac17ae48": { "views": [] }, "386991b0b1424a9c816dac6a29e1206b": { "views": [] }, "386cf43742234dda994e35b41890b4d8": { "views": [] }, "388571e8e0314dfab8e935b7578ba7f9": { "views": [ { "cell_index": 27 } ] }, "3974e38e718547efaf0445da2be6a739": { "views": [] }, "398490e0cc004d22ac9c4486abec61e1": { "views": [] }, "399875994aba4c53afa8c49fae8d369e": { "views": [] }, "39b64aa04b1d4a81953e43def0ef6e10": { "views": [] }, "39ffc3dd42d94a27ba7240d10c11b565": { "views": [] }, "3a21291c8e7249e3b04417d31b0447cf": { "views": [ { "cell_index": 27 } ] }, "3a377d9f46704d749c6879383c89f5d3": { "views": [] }, "3a44a6f1f62742849e96d957033a0039": { "views": [] }, "3b22d68709b046e09fe70f381a3944cd": { "views": [ { "cell_index": 27 } ] }, "3b329209c8f547acae1925dc3eb4af77": { "views": [] }, "3c1b2ec10a9041be8a3fad9da78ff9f6": { "views": [ { "cell_index": 27 } ] }, "3c2be3c85c6d41268bb4f9d63a43e196": { "views": [] }, "3c6796eff7c54238a7b7776e88721b08": { "views": [] }, "3cbca3e11edf439fb7f8ba41693b4824": { "views": [] }, "3d4b6b7c0b0c48ff8c4b8d78f58e0f1c": { "views": [] }, "3de1faf0d2514f49a99b3d60ea211495": { "views": [] }, "3df60d9ac82b42d9b885d895629e372e": { "views": [] }, "3e5b9fd779574270bf58101002c152ce": { "views": [ { "cell_index": 27 } ] }, "3e80f34623c94659bfab5b3b56072d9a": { "views": [] }, "3e8bb05434cb4a0291383144e4523840": { "views": [ { "cell_index": 27 } ] }, "3ea1c8e4f9b34161928260e1274ee048": { "views": [] }, "3f32f0915bc6469aaaf7170eff1111e3": { "views": [] }, "3fe69a26ae7a46fda78ae0cb519a0f8b": { "views": [] }, "4000ecdd75d9467e9dffd457b35aa65f": { "views": [] }, "402d346f8b68408faed2fd79395cf3fb": { "views": [] }, "402f4116244242148fdc009bb399c3bd": { "views": [] }, "4049e0d7c0d24668b7eae2bb7169376e": { "views": 
[] }, "4088c9ed71b0467b9b9417d5b04eda0e": { "views": [] }, "40d70faa07654b6cb13496c32ba274b3": { "views": [] }, "4146be21b7614abe827976787ec570f1": { "views": [] }, "4198c08edda440dd93d1f6ce3e4efa62": { "views": [] }, "42023d7d3c264f9d933d4cee4362852b": { "views": [] }, "421ad8c67f754ce2b24c4fa3a8e951cf": { "views": [] }, "4263fe0cef42416f8d344c1672f591f9": { "views": [] }, "428e42f04a1e4347a1f548379c68f91b": { "views": [ { "cell_index": 27 } ] }, "42a47243baf34773943a25df9cf23854": { "views": [] }, "4343b72c91d04a7c9a6080f30fc63d7d": { "views": [] }, "43488264fc924c01a30fa58604074b07": { "views": [] }, "4379175239b34553bf45c8ef9443ac55": { "views": [ { "cell_index": 27 } ] }, "43859798809a4a289c58b4bd5e49d357": { "views": [] }, "43ad406a61a34249b5622aba9450b23d": { "views": [] }, "4421c121414d464bb3bf1b5f0e86c37b": { "views": [ { "cell_index": 27 } ] }, "445cc08b4da44c2386ac9379793e3506": { "views": [] }, "447cff7e256c434e859bb7ce9e5d71c8": { "views": [] }, "44af7da9d8304f07890ef7d11a9f95fe": { "views": [] }, "45021b6f05db4c028a3b5572bc85217f": { "views": [] }, "457768a474844556bf9b215439a2f2e9": { "views": [] }, "45d5689de53646fe9042f3ce9e281acc": { "views": [] }, "461aa21d57824526a6b61e3f9b5af523": { "views": [] }, "472ca253aab34b098f53ed4854d35f23": { "views": [] }, "4731208453424514b471f862804d9bb8": { "views": [ { "cell_index": 27 } ] }, "47dfef9eaf0e433cb4b3359575f39480": { "views": [] }, "48220a877d494a3ea0cc9dae19783a13": { "views": [] }, "4882c417949b4b6788a1c3ec208fb1ac": { "views": [] }, "49f5c38281984e3bad67fe3ea3eb6470": { "views": [] }, "4a0d39b43eee4e818d47d382d87d86d1": { "views": [] }, "4a470bf3037047f48f4547b594ac65fa": { "views": [] }, "4abab5bca8334dfbb0434be39eb550db": { "views": [] }, "4b48e08fd383489faa72fc76921eac4e": { "views": [] }, "4b9439e6445c4884bd1cde0e9fd2405e": { "views": [] }, "4b9fa014f9904fcf9aceff00cc1ebf44": { "views": [] }, "4bdc63256c3f4e31a8fa1d121f430518": { "views": [] }, "4bebb097ddc64bbda2c475c3a0e92ab5": { "views": [] }, "4c201df21ca34108a6e7b051aa58b7f6": { "views": [] }, "4ced8c156fd941eca391016fc256ce40": { "views": [] }, "4d281cda33fa489d86228370e627a5b0": { "views": [ { "cell_index": 27 } ] }, "4d85e68205d94965bdb437e5441b10a1": { "views": [] }, "4e0e6dd34ba7487ba2072d352fe91bf5": { "views": [] }, "4e82b1d731dd419480e865494f932f80": { "views": [] }, "4e9f52dea051415a83c4597c4f7a6c00": { "views": [] }, "4ec035cba73647358d416615cf4096ee": { "views": [ { "cell_index": 27 } ] }, "4f09442f99aa4a9e9f460f82a50317c4": { "views": [] }, "4f80b4e6b074475698efbec6062e3548": { "views": [] }, "4f905a287b4f4f0db64b9572432b0139": { "views": [] }, "50a339306cd549de86fbe5fa2a0a3503": { "views": [] }, "51068697643243e18621c888a6504434": { "views": [] }, "51333b89f44b41aba813aef099bdbb42": { "views": [] }, "5141ae07149b46909426208a30e2861e": { "views": [ { "cell_index": 27 } ] }, "515606cb3b3a4fccad5056d55b262db4": { "views": [] }, "51aa6d9f5a90481db7e3dd00d77d4f09": { "views": [] }, "524091ea717d427db2383b46c33ef204": { "views": [] }, "524d1132c88f4d91b15344cc427a9565": { "views": [] }, "52f70e249adc4edb8dca28b883a5d4f4": { "views": [] }, "531c080221f64b8ca50d792bbaa6f31e": { "views": [] }, "53349c544b54450f8e2af9b8ba176d78": { "views": [] }, "53a8b8e7b7494d02852a0dc5ccca51a2": { "views": [] }, "53c963469eee41b59479753201626f18": { "views": [] }, "5436516c280a49828c1c2f4783d9cf0e": { "views": [] }, "55a1b0b794f44ac796bc75616f65a2a1": { "views": [ { "cell_index": 27 } ] }, "55ebf735de4c4b5ba2f09bc51d3593fd": { "views": [] }, 
"56007830e925480e94a12356ff4fb6a4": { "views": [] }, "56def8b3867843f990439b33dab3da58": { "views": [] }, "5719bb596a5649f6af38c11c3daae6e9": { "views": [] }, "572245b145014b6e91a3b5fe55e4cf78": { "views": [] }, "5728da2e2d5a4c5595e1f49723151dca": { "views": [] }, "579673c076da4626bc34a34370702bd4": { "views": [] }, "57c2148f18314c3789c3eb9122a85c86": { "views": [] }, "58066439757048b98709d3b3f99efdf8": { "views": [] }, "58108da85e9443ea8ba884e8adda699e": { "views": [] }, "583f252174d9450196cdc7c1ebab744f": { "views": [] }, "58b92095873e4d22895ee7dde1f8e09a": { "views": [] }, "58be1833a5b344fb80ec86e08e8326da": { "views": [] }, "58ee0f251d7c4aca82fdace15ff52414": { "views": [] }, "590f2f9f8dc342b594dc9e79990e641f": { "views": [] }, "593c6f6b541e49be95095be63970f335": { "views": [] }, "593d3f780c1a4180b83389afdb9fecfe": { "views": [] }, "5945f05889be40019f93a90ecd681125": { "views": [] }, "595c537ed2514006ac823b4090cf3b4b": { "views": [ { "cell_index": 27 } ] }, "599cfb7471ec4fd29d835d2798145a54": { "views": [] }, "5a8d17dc45d54463a6a49bad7a7d87ac": { "views": [] }, "5bb323bde7e4454e85aa18fda291e038": { "views": [] }, "5bc5e0429c1e4863adc6bd1ff2225b6d": { "views": [] }, "5bd0fafc4ced48a5889bbcebc9275e40": { "views": [] }, "5ccf965356804bc38c94b06698a2c254": { "views": [] }, "5d1f96bedebf489cac8f820c783f7a14": { "views": [] }, "5d3fc58b96804b57aad1d67feb26c70a": { "views": [] }, "5d41872e720049198a319adc2f476276": { "views": [] }, "5d7a630da5f14cd4969b520c77bc5bc5": { "views": [] }, "5da153e0261e43af8fd1c3c5453cace0": { "views": [] }, "5dde90afb01e44888d3c92c32641d4e2": { "views": [] }, "5de2611543ff4475869ac16e9bf406fd": { "views": [] }, "5e03db9b91124e79b082f7e3e031a7d3": { "views": [] }, "5e576992ccfe4bb383c88f80d9746c1d": { "views": [] }, "5e91029c26c642a9a8c90186f3acba8e": { "views": [] }, "5ea2a6c21b9845d18f72757ca5af8340": { "views": [] }, "5ef08dc24584438c8bc6c618763f0bc8": { "views": [] }, "5f823979d2ce4c34ba18b4ca674724e4": { "views": [ { "cell_index": 27 } ] }, "5fc7b070fc1a4e809da4cda3a40fc6d9": { "views": [] }, "601ca9a27da94a6489d62ac26f2805a9": { "views": [] }, "605cbb1049a4462e9292961e62e55cee": { "views": [] }, "60addd9bec3f4397b20464fdbcf66340": { "views": [] }, "60e17d6811c64dc8a69b342abe20810a": { "views": [] }, "611840434d9046488a028618769e4b86": { "views": [] }, "627ab7014bbf404ba8190be17c22e79d": { "views": [] }, "633aa1edce474560956be527039800e7": { "views": [] }, "63b6e287d1aa48efad7c8154ddd8f9c4": { "views": [] }, "63dcfdb9749345bab675db257bda4b81": { "views": [] }, "640ba8cc905a4b47ad709398cc41c4e3": { "views": [] }, "644dcff39d7c47b7b8b729d01f59bee5": { "views": [ { "cell_index": 27 } ] }, "6455faf9dbc6477f8692528e6eb90c9a": { "views": [ { "cell_index": 27 } ] }, "64ca99573d5b48d2ba4d5815a50e6ffe": { "views": [] }, "65d7924ba8c44d3f98a1d2f02dc883f1": { "views": [] }, "665ed2b201144d78a5a1f57894c2267c": { "views": [ { "cell_index": 27 } ] }, "66742844c1cd47ddbbe9aacf2e805f36": { "views": [] }, "6678811915f14d0f86660fe90f63bd60": { "views": [] }, "66a04a5cf76e429cadbebfc527592195": { "views": [] }, "66e5c563ffe94e29bab82fdecbd1befa": { "views": [] }, "673066e0bb0b40e288e6750452c52bf6": { "views": [] }, "67ae0fb9621d488f879d0e3c458e88e9": { "views": [] }, "687702eca5f74e458c8d43447b3b9ed5": { "views": [] }, "68a4135d6f0a4bae95130539a2a44b3c": { "views": [] }, "68c3a74e9ea74718b901c812ed179f47": { "views": [] }, "694bd01e350449c2a40cd4ffc5d5a873": { "views": [] }, "6981c38c44ad4b42bfb453b36d79a0e6": { "views": [] }, "69e08ffffce9464589911cc4d2217df2": { "views": [] }, 
"6a28f605a5d14589907dba7440ede2fc": { "views": [ { "cell_index": 27 } ] }, "6a74dc52c2a54837a64ad461e174d4e0": { "views": [] }, "6ad1e0bf705141b3b6e6ab7bd6f842ea": { "views": [] }, "6b37935db9f44e6087d1d262a61d54ac": { "views": [] }, "6b402f0f3afb4d0dad0e2fa8b71aa890": { "views": [] }, "6bc95be59a054979b142d2d4a8900cf2": { "views": [] }, "6ce0ea52c2fc4a18b1cce33933df2be4": { "views": [] }, "6d7effd6bc4c40a4b17bf9e136c5814c": { "views": [ { "cell_index": 27 } ] }, "6d9a639e949c4d1d8a7826bdb9e67bb5": { "views": [] }, "6e18fafd95744f689c06c388368f1d21": { "views": [] }, "6e2bc4a1e3424e2085d0363b7f937884": { "views": [] }, "6e30c494930c439a996ba7c77bf0f721": { "views": [] }, "6e682d58cc384145adb151652f0e3d15": { "views": [] }, "6f08def65d27471b88fb14e9b63f9616": { "views": [] }, "6f20c1dc00ef4a549cd9659a532046bf": { "views": [] }, "6f605585550d4879b2f27e2fda0192be": { "views": [] }, "706dd4e39f194fbbba6e34acd320d1c3": { "views": [] }, "70f21ab685dc4c189f00a17a1810bbad": { "views": [] }, "7101b67c47a546c881fdaf9c934c0264": { "views": [] }, "71b0137b5ed741be979d1896762e5c75": { "views": [] }, "7223df458fdf4178af0b9596e231c09c": { "views": [] }, "7262519db6f94e2a9006c68c20b79d29": { "views": [] }, "72dfe79a3e52429da1cf4382e78b2144": { "views": [ { "cell_index": 27 } ] }, "72e8d31709eb4e3ea28af5cb6d072ab2": { "views": [] }, "73647a1287424ee28d2fb3c4471d720c": { "views": [] }, "739c5dde541a41e1afae5ba38e4b8ee3": { "views": [] }, "74187cc424a347a5aa73b8140772ec68": { "views": [] }, "7418edf751a6486c9fae373cde30cb74": { "views": [] }, "744302ec305b4405894ed1459b9d41d0": { "views": [] }, "74dfbaa15be44021860f7ba407810255": { "views": [] }, "750a30d80fd740aaabc562c0564f02a7": { "views": [] }, "75e344508b0b45d1a9ae440549d95b1a": { "views": [ { "cell_index": 27 } ] }, "766efd1cfee542d3ba068dfa1705c4eb": { "views": [] }, "7738084e8820466f9f763d49b4bf7466": { "views": [] }, "781855043f1147679745947ff30308fa": { "views": [] }, "78e2cfb79878452fa4f6e8baea88f822": { "views": [] }, "796027b3dd6b4b888553590fecd69b29": { "views": [] }, "7a302f58080c4420b138db1a9ed8103e": { "views": [] }, "7a3c362499f54884b68e951a1bcfc505": { "views": [] }, "7a4ee63f5f674454adf660bfcec97162": { "views": [] }, "7ac2c18126414013a1b2096233c88675": { "views": [] }, "7b1e3c457efa4f92ab8ff225a1a2c45e": { "views": [] }, "7b8897b4f8094eef98284f5bb1ed5d51": { "views": [] }, "7bbfd7b13dd242f0ac15b36bb437eb22": { "views": [] }, "7d3c88bc5a0f4b428174ff33d5979cfd": { "views": [] }, "7d4f53bd14d44f3f80342925f5b0b111": { "views": [] }, "7d95ca693f624336a91c3069e586ef1b": { "views": [] }, "7dcdc07b114e4ca69f75429ec042fabf": { "views": [] }, "7e79b941d7264d27a82194c322f53b80": { "views": [] }, "7f2f98bbffc0412dbb31c387407a9fed": { "views": [ { "cell_index": 27 } ] }, "7f4688756da74b369366c22fd99657f4": { "views": [] }, "7f7ed281359f4a55bbe75ce841dd1453": { "views": [] }, "7fdf429182a740a097331bddad58f075": { "views": [] }, "81b312df679f4b0d8944bc680a0f517e": { "views": [] }, "82036e8fa76544ae847f2c2fc3cf72c2": { "views": [] }, "821f1041188a43a4be4bdaeb7fa2f201": { "views": [] }, "827358a9b4ce49de802df37b7b673aea": { "views": [] }, "82db288a0693422cbd846cc3cb5f0415": { "views": [] }, "82e2820c147a4dff85a01bcddbad8645": { "views": [ { "cell_index": 27 } ] }, "82f795491023435e8429ea04ff4dc60a": { "views": [] }, "8317620833b84ccebc4020d90382e134": { "views": [] }, "8346e26975524082af27967748792444": { "views": [] }, "83f8ed39d0c34dce87f53f402d6ee276": { "views": [] }, "844ac22a0ebe46db84a6de7472fe9175": { "views": [] }, 
"849948fe6e3144e1b05c8df882534d5a": { "views": [] }, "85058c7c057043b185870da998e4be61": { "views": [] }, "85443822f3714824bec4a56d4cfed631": { "views": [] }, "8566379c7ff943b0bb0f9834ed4f0223": { "views": [] }, "85a3c6f9a0464390be7309edd36c323c": { "views": [] }, "85d7a90fbac640c9be576f338fa25c81": { "views": [] }, "85f31444b4e44e11973fd36968bf9997": { "views": [] }, "867875243ad24ff6ae39b311efb875d3": { "views": [] }, "8698bede085142a29e9284777f039c93": { "views": [] }, "86bf40f5107b4cb6942800f3930fdd41": { "views": [] }, "874c486c4ebb445583bd97369be91d9b": { "views": [] }, "87c469625bda412185f8a6c803408064": { "views": [] }, "87d4bd76591f4a9f991232ffcff3f73b": { "views": [] }, "87df3737c0fc4e848fe4100b97d193df": { "views": [] }, "886b599c537b467ab49684d2c2f8fb78": { "views": [] }, "889e19694e8043e289d8efc269eba934": { "views": [] }, "88c628983ad1475ea3a9403f6fea891c": { "views": [] }, "88c807c411d34103ba2e31b2df28b947": { "views": [] }, "895ddca8886b4c06ad1d71326ca2f0af": { "views": [] }, "899cc011a1bd4046ac798bc5838c2150": { "views": [] }, "89d0e7a3090c47df9689d8ca28914612": { "views": [] }, "89ea859f8bbd48bb94b8fa899ab69463": { "views": [] }, "8a600988321e4e489450d26dedaa061f": { "views": [] }, "8adcca252aff41a18cca5d856c17e42f": { "views": [] }, "8b2fe9e4ea1a481089f73365c5e93d8b": { "views": [] }, "8b5acd50710c4ca185037a73b7c9b25c": { "views": [] }, "8bbdba73a1454cac954103a7b1789f75": { "views": [] }, "8cffde5bdb3d4f7597131b048a013929": { "views": [ { "cell_index": 27 } ] }, "8db2abcad8bc44df812d6ccf2d2d713c": { "views": [ { "cell_index": 27 } ] }, "8dd5216b361c44359ba1233ee93683a4": { "views": [ { "cell_index": 27 } ] }, "8e13719438804be4a0b74f73e25998cd": { "views": [] }, "8eb4ff3279fe4d43a9d8ee752c78a956": { "views": [] }, "8f577d437d4743fd9399fefcd8efc8cb": { "views": [] }, "8f8fbe8fd1914eae929069aeeac16b6d": { "views": [] }, "8f9b8b5f7dd6425a9e8e923464ab9528": { "views": [] }, "8f9e3422db114095a72948c37e98dd3e": { "views": [] }, "8fd325068289448d990b045520bad521": { "views": [] }, "9039bc40a5ad4a1c87272d82d74004e2": { "views": [] }, "90bf5e50acbb4bccad380a6e33df7e40": { "views": [] }, "91028fc3e4bc4f6c8ec752b89bcf3139": { "views": [] }, "9274175be7fb47f4945e78f96d39a7a6": { "views": [] }, "929245675b174fe5bfa102102b8db897": { "views": [] }, "92be1f7fb2794c9fb25d7bbb5cbc313d": { "views": [] }, "933904217b6045c1b654b7e5749203f5": { "views": [ { "cell_index": 27 } ] }, "936bc7eb12e244c196129358a16e14bb": { "views": [] }, "936c09f4dde8440b91e9730a0212497c": { "views": [] }, "9406b6ae7f944405a0e8a22f745a39b2": { "views": [] }, "942a96eea03740719b28fcc1544284d4": { "views": [] }, "94840e902ffe4bbba5b374ff4d26f19f": { "views": [] }, "948d01f0901545d38e05f070ce4396e4": { "views": [] }, "94e2a0bc2d724f7793bb5b6d25fc7088": { "views": [] }, "94f2b877a79142839622a61a3a081c03": { "views": [ { "cell_index": 27 } ] }, "94f30801a94344129363c8266bf2e1f8": { "views": [] }, "95b127e8aff34a76a813783a6a3c6369": { "views": [] }, "95d44119bf714e42b163512d9a15bbc5": { "views": [] }, "95f016e9ea9148a4a3e9f04cb8f5132d": { "views": [] }, "968e9e9de47646409744df3723e87845": { "views": [] }, "97207358fc65430aa196a7ed78b252f0": { "views": [ { "cell_index": 27 } ] }, "9768d539ee4044dc94c0bd5cfb827a18": { "views": [] }, "98587702cc55456aa881daf879d2dc8d": { "views": [] }, "986c6c4e92964759903d6eb7f153df8a": { "views": [ { "cell_index": 27 } ] }, "987d808edd63404f8d6f2ce42efff33a": { "views": [] }, "9895c26dfb084d509adc8abc3178bad3": { "views": [] }, "994bc7678f284a24a8700b2a69f09f8d": { "views": [] }, 
"99eee4e3d9c34459b12fe14cee543c28": { "views": [] }, "9a5c0b0805034141a1c96ddd57995a3c": { "views": [] }, "9a7862bb66a84b4f897924278a809ef3": { "views": [] }, "9b812f733f6a4b60ba4bf725959f7913": { "views": [] }, "9bb5ae9ff9c94fe7beece9ce43f519af": { "views": [] }, "9bfde7b437fb4e76a16a49574ea5b7ec": { "views": [] }, "9c1d14484b6d4ab3b059731f17878d14": { "views": [] }, "9c7a66ead55e48c8b92ef250a5a464b7": { "views": [] }, "9ce50a53aafe439ebb19fff363c1bfe2": { "views": [] }, "9d5e9658af264ad795f6a5f3d8c3c30f": { "views": [ { "cell_index": 27 } ] }, "9d7aa65511b6482d9587609ad7898f54": { "views": [ { "cell_index": 27 } ] }, "9d87f94baf454bd4b529e55e0792a696": { "views": [] }, "9de4bd9c6a7b4f3dbd401df15f0b9984": { "views": [] }, "9dfd6b08a2574ed89f0eb084dae93f73": { "views": [] }, "9e1dffcb1d9d48aaafa031da2fb5fed9": { "views": [] }, "9efb46d2bb0648f6b109189986f4f102": { "views": [ { "cell_index": 27 } ] }, "9f1439500d624f769dd5e5c353c46866": { "views": [] }, "9f27ba31ccc947b598dc61aefca16a7f": { "views": [] }, "9f31a58b6e8e4c79a92cf65c497ee000": { "views": [] }, "9f43f85a0fb9464e9b7a25a85f6dba9c": { "views": [ { "cell_index": 27 } ] }, "9f4970dc472946d48c14e93e7f4d4b70": { "views": [] }, "9f5dd25217a84799b72724b2a37281ea": { "views": [] }, "9faa50b44e1842e0acac301f93a129c4": { "views": [ { "cell_index": 27 } ] }, "a0202917348d4c41a176d9871b65b168": { "views": [] }, "a058f021f4ca4daf8ab830d8542bf90b": { "views": [] }, "a0a2dded995543a6b68a67cd91baa252": { "views": [] }, "a0e170b3ea484fd984985d2607f90ef3": { "views": [] }, "a168e79f4cbb44c8ac7214db964de5f2": { "views": [] }, "a182b774272b48238b55e3c4d40e6152": { "views": [] }, "a1840ca22d834df2b145151baf6d8241": { "views": [ { "cell_index": 27 } ] }, "a1bb2982e88e4bb1a2729cc08862a859": { "views": [] }, "a1d897a6094f483d8fc9a3638fbc179d": { "views": [] }, "a231ee00d2b7404bb0ff4e303c6b04ee": { "views": [] }, "a29fdc2987f44e69a0343a90d80c692c": { "views": [] }, "a2de3ac1f4fe423997c5612b2b21c12f": { "views": [] }, "a30ba623acec4b03923a2576bcfcbdf5": { "views": [] }, "a3357d5460c5446196229eae087bb19e": { "views": [] }, "a358d9ecd754457db178272315151fa3": { "views": [] }, "a35aec268ac3406daa7fe4563f83f948": { "views": [] }, "a38c5ed35b9945008341c2d3c0ef1470": { "views": [] }, "a39cfb47679c4d2895cda12c6d9d2975": { "views": [ { "cell_index": 27 } ] }, "a55227f2fd5d42729fc4fd39a8c11914": { "views": [] }, "a65af2c8506d47ec803c15815e2ab445": { "views": [] }, "a6d2366540004eeaab760c8be196f10a": { "views": [] }, "a709f15a981a468b9471a0f672f961a7": { "views": [] }, "a7258472ad944d038cd227de28d9155f": { "views": [] }, "a72eb43242c34ef19399c52a77da8830": { "views": [] }, "a7568aed621548649e37cfa6423ca198": { "views": [] }, "a83f7f5c09a845ecb3f5823c1d178a54": { "views": [] }, "a87c651448f14ce4958d73c2f1e413e1": { "views": [ { "cell_index": 27 } ] }, "a8e78f5bc64e412ab44eb9c293a7e63b": { "views": [] }, "a996d507452241e0b99aabe24eecbdd9": { "views": [] }, "a9a4b7a2159e40f8aa93a50f11048342": { "views": [] }, "a9cc48370b964a888f8414e1742d6ff2": { "views": [] }, "a9dcbe9e9a4445bf9cf8961d4c1214a6": { "views": [] }, "aab29dfddb98416ea815475d6c6a3eed": { "views": [] }, "ab89783a86bc4939a5f78957f4019553": { "views": [] }, "abaee5bb577d4a68b6898d637a4c7898": { "views": [] }, "abecb04251e04260860074b8bdad088a": { "views": [] }, "acc07b8cf2cf4d50ae1bceef2254637f": { "views": [] }, "ae3ee1ee05a2443c8bf2f79cd9e86e56": { "views": [] }, "ae4e85e2bceb4ec783dbfaaf3a174ea7": { "views": [] }, "aec1a51db98f470cb0854466f3461fc1": { "views": [] }, "afc5dccd3db64a1592ee0b2fd516b71d": { 
"views": [] }, "afe28f5bae8941b19717e3d7285ddc61": { "views": [] }, "b00516b171544bca9113adc99ed528a1": { "views": [] }, "b005d7f2afbe479eb02678447a079a1a": { "views": [] }, "b020ad1a7750461bb79fe4e74b9384f6": { "views": [] }, "b07d0aab375142978e1261a6a4c94b10": { "views": [] }, "b2c18df5c51649cdbdaf64092fc945b3": { "views": [] }, "b410c14ee52d4af49c08da115db85ac7": { "views": [] }, "b41220079b2b49c2ba6f59dcfe9e7757": { "views": [] }, "b445a187ca6943bbb465782a67288ce5": { "views": [] }, "b4dfb435038645dc9673ea4257fc26f3": { "views": [] }, "b5633708bd8b4abdaec77a96aca519bb": { "views": [] }, "b59b2622026d4ec582354d919e16f658": { "views": [] }, "b635f31747e14f989c7dee2ba5d5caa5": { "views": [] }, "b63dfdde813a4f019998e118b5168943": { "views": [] }, "b6c3d440986d44ed88a9471a69b70e05": { "views": [] }, "b6ee195c9bfd48ee8526b8cf0f3322b9": { "views": [] }, "b7064dd21c9949d79f40c73fee431dff": { "views": [] }, "b7537298609f4d64b8e36692b84f376c": { "views": [] }, "b755013f41fa4dce8e2bab356d85d26d": { "views": [] }, "b7cd4bfabc2e40fe9f30de702ae63716": { "views": [] }, "b7e4c497ff5c4173961ffdc3bd3821a9": { "views": [ { "cell_index": 27 } ] }, "b821a13ce3e8453d85f07faccc95fee1": { "views": [] }, "b86ea9c1f1ee45a380e35485ad4e2fac": { "views": [] }, "b87f4d4805944698a0011c10d626726c": { "views": [] }, "b8e173c7c8be41df9161cbbe2c4c6c86": { "views": [] }, "b9322adcd8a241478e096aa1df086c78": { "views": [] }, "b9ad471398784b6889ce7a1d2ef5c4c0": { "views": [] }, "b9c138598fce460692cc12650375ee52": { "views": [ { "cell_index": 27 } ] }, "ba146eb955754db88ba6c720e14ea030": { "views": [] }, "ba48cba009e8411ea85c7e566a47a934": { "views": [] }, "bb2793de83a64688b61a2007573a8110": { "views": [] }, "bb53891d7f514a17b497f699484c9aed": { "views": [] }, "bbe5dea9d57d466ba4e964fce9af13cf": { "views": [ { "cell_index": 27 } ] }, "bbe88faf528d44a0a9083377d733d66a": { "views": [] }, "bc0525d022404722a921132e61319e46": { "views": [] }, "bc320fb35f5744cc82486b85f7a53b6f": { "views": [] }, "bc900e9562c546f9ae3630d5110080ec": { "views": [] }, "bcbf6b3ff19d4eb5aa1b8a57672d7f6f": { "views": [] }, "bccf183ccb0041e380732005f2ca2d0a": { "views": [] }, "bd0d18e3441340a7a56403c884c87a8e": { "views": [] }, "bd21e4fe92614c22a76ae515077d2d11": { "views": [] }, "bd5b05203cfd402596a6b7f076c4a8f8": { "views": [] }, "beb0c9b29d8d4d69b3147af666fa298b": { "views": [ { "cell_index": 27 } ] }, "bf0d147a6a1346799c33807404fa1d46": { "views": [] }, "c03d4477fa2a423dba6311b003203f62": { "views": [] }, "c05697bcb0a247f78483e067a93f3468": { "views": [] }, "c09c3d0e94ca4e71b43352ca91b1a88a": { "views": [] }, "c0d015a0930e4ddf8f10bbace07c0b24": { "views": [] }, "c15edd79a0fd4e24b06d1aae708a38c4": { "views": [] }, "c20b6537360f4a70b923e6c5c2ba7d9b": { "views": [] }, "c21fff9912924563b28470d32f62cd44": { "views": [] }, "c2482621d28542268a2b0cbf4596da37": { "views": [] }, "c25bd0d8054b4508a6b427447b7f4576": { "views": [] }, "c301650ac4234491af84937a8633ad76": { "views": [] }, "c333a0964b1e43d0817e73cb47cf0317": { "views": [] }, "c36213b1566843ceb05b8545f7d3325c": { "views": [] }, "c37d0add29fa4f41a47caf6538ec6685": { "views": [] }, "c409a01effb945c187e08747e383463c": { "views": [] }, "c4e104a7b731463688e0a8f25cf50246": { "views": [] }, "c54f609af4e94e93b57304bc55e02eba": { "views": [] }, "c576bf6d24184f3a9f31d4f40231ce87": { "views": [] }, "c58ab80a895344008b5aadd8b8c628a4": { "views": [] }, "c5d28bea41da447e88f4cec9cfaaf197": { "views": [] }, "c74bbd55a8644defa3fcef473002a626": { "views": [ { "cell_index": 27 } ] }, "c856e77b213b400599b6e026baaa4c85": { 
"views": [] }, "c894f9e350a1473abb28ff651443ae6f": { "views": [] }, "c8e3827ae28b45bc9768a8c3e35cc8b1": { "views": [] }, "c95bf1935b71400e98c63722b77caa08": { "views": [] }, "c9e5129d30ea4b78b846e8e92651b0e9": { "views": [] }, "ca2123c7b103485c851815cbcb4a6c17": { "views": [] }, "ca34917db02148168daf0c30ceed7466": { "views": [] }, "caa6adf7b0d243da8229c317c7482fe3": { "views": [] }, "cb924475ebb64e76964f88e830979d38": { "views": [] }, "cba1473ccaee4b2a89aba4d2b4b1e648": { "views": [] }, "cbd735eb8eb446069ee912d795ccaf14": { "views": [] }, "cc0ee37900ef40069515c79e99a9a875": { "views": [] }, "cc564bca35c743b89697f5cfd4ecccc2": { "views": [] }, "cc5a47588e2b4c8eb5deff560a0256c2": { "views": [] }, "ccc64ac3a8a84ae9815ff9e8bdc3279d": { "views": [] }, "cd02a06cec7342438f8585af6227db96": { "views": [] }, "cd236465e91d4a90a2347e6baab6ab71": { "views": [] }, "cd9a0aa1700a4407ab445053029dca18": { "views": [] }, "cdd6c6a945a74c568d611b42e4ba8a1a": { "views": [] }, "cdf0323ea1324c0b969f49176ecee1c2": { "views": [] }, "ce3a0e82e80d48b9b2658e0c52196644": { "views": [ { "cell_index": 27 } ] }, "ce6ad0459f654b6785b3a71ccdf05063": { "views": [] }, "ce8d3cd3535b459c823da2f49f3cc526": { "views": [ { "cell_index": 27 } ] }, "cf8c8f791d0541ffa4f635bb07389292": { "views": [] }, "cfed29ab68f244e996b0d571c31020ec": { "views": [] }, "d034cbd7b06a448f98b3f11b68520c08": { "views": [] }, "d13135f5facc4c5996549a85974145a1": { "views": [] }, "d18c7c17fa93493ebc622fe3d2c0d44e": { "views": [] }, "d23b743d7d0342aca257780f2df758d6": { "views": [] }, "d2fe43f4a2064078a6c8da47f8afb903": { "views": [] }, "d34f626ca035456bb9e0c9ad2a9dced1": { "views": [] }, "d359911be08f4342b20e86a954cd060f": { "views": [] }, "d4d76a1c09a342e79cd6733886626459": { "views": [] }, "d58d12f54e2b426fba4ca611b0ffc68f": { "views": [] }, "d5e2a77d429d4ca0969e1edec5dc2690": { "views": [] }, "d5f4bbe3242245f0a2c3b18a284e55f8": { "views": [] }, "d6c325f3069a4186b3022619f4280c37": { "views": [] }, "d6d46520bbcf495bad20bcd266fe1357": { "views": [] }, "d72b7c8058324d1bb56b6574090ccda6": { "views": [] }, "d73bbb49a33d49e187200fa7c8f23aaa": { "views": [] }, "d80e4f8eb9a54aef8b746e38d8c3ef1b": { "views": [] }, "d819255bc7104ee8b9466b149dba5bff": { "views": [] }, "d819fcff913441d39a41982518127af5": { "views": [] }, "d8295021db704345a63c9ff9d692b761": { "views": [] }, "d83329fe36014f85bb5d0247d3ae4472": { "views": [ { "cell_index": 27 } ] }, "d88a0305cc224037a14e5040ed8e13af": { "views": [] }, "d89b81d63c6048ff800d3380bf921ac0": { "views": [] }, "d8d8667ab50944e4b066d648aa3c8e2a": { "views": [] }, "d8fd2b5ef6e24628b2b5102d3cd375f3": { "views": [] }, "d9579a126d5f44a3bc0a731e0ad55f24": { "views": [] }, "da51bd4d4fd848699919e3973b2fabc2": { "views": [] }, "dba5a5a8fec346b2bcdc88f4ce294550": { "views": [] }, "dc201c38ac434cb8a424553f1fa5a791": { "views": [] }, "dc631df85ae84ffc964acd7a76e399ce": { "views": [] }, "dc7376a2272e44179f237e5a1c7f6a49": { "views": [ { "cell_index": 27 } ] }, "dc8a45203a0a457c927f582f9d576e5d": { "views": [] }, "dcc0e1ea9e994fc0827d9d7f648e4ad9": { "views": [] }, "dce6f4cb98094ee1b06c0dd0ff8f488a": { "views": [] }, "dcfc688de41b4ed7a8f89ae84089d5c0": { "views": [] }, "dd486b2cbda84c83ace5ceaee8a30ff8": { "views": [] }, "ddcfbf7b97714357920ba9705e8d4ab0": { "views": [] }, "ddd4485714564c65b70bd865783076af": { "views": [] }, "de7738417f1040b1a06ad25e485eb91d": { "views": [] }, "df4cada92e484fd4ae75026eaf1845e2": { "views": [] }, "dfb3707b4a01441c8a0a1751425b8e1c": { "views": [] }, "e03b701a52d948aab86117c928cbe275": { "views": [] }, 
"e0a614fe085c4d3c835c78d6ada60a40": { "views": [] }, "e138e0c7d5a4471d99bbdac50de00fe1": { "views": [] }, "e154289ce1774450a9a51ac45a1d5725": { "views": [] }, "e25c1d2c78c94c9a805920df36268508": { "views": [] }, "e281172ebc7f48b5ae6545b16da79477": { "views": [] }, "e2862bd7efac4bc0b23532705f5e46c4": { "views": [] }, "e2cd9bb21f254e08885f43fd6e968879": { "views": [] }, "e2f4acecaf194351b8e67439440a9966": { "views": [] }, "e3198c124ac841a79db062efa81f6812": { "views": [] }, "e36f3009f61a4f5ba047562e70330add": { "views": [] }, "e3765274f28b4a55a82d9115ded151de": { "views": [] }, "e37e3fba3b40413180cd30e594bf62bd": { "views": [] }, "e3f9760867fa410fbdc4611aef1cee18": { "views": [] }, "e4331c134ab24f9cae99d476dfa04c89": { "views": [] }, "e46db59e121045169a1ea5313b1748b7": { "views": [] }, "e475d1e00f9d48edadac886fb53c2a20": { "views": [] }, "e48449d21c2d4360b851169468066470": { "views": [] }, "e4c26b8a42b54e959b276a174f2c2795": { "views": [] }, "e4e55dabd92f4c17b78ed4b6881842e8": { "views": [] }, "e4e5dd3dc28d4aa3ab8f8f7c4a475115": { "views": [ { "cell_index": 27 } ] }, "e516fd8ebfc6478c95130d6edec77c88": { "views": [] }, "e5afb8d0e8a94c4dac18f2bbf1d042ce": { "views": [] }, "e5bcb13bf2e94afc857bcbb37f6d4d87": { "views": [] }, "e64ab85e80184b70b69d01a9c6851943": { "views": [ { "cell_index": 27 } ] }, "e66b26fb788944ba83b7511d79b85dc5": { "views": [] }, "e73434cfcc854429ac27ddc9c9b07f5e": { "views": [] }, "e7a8244ea5a84493b3b5bdeaf92a50b4": { "views": [] }, "e81ed2c281df4f06bc1d4e6b67c574b4": { "views": [] }, "e85ff7ccdc034c268df9cb0e95e9b850": { "views": [] }, "e8a198bff55a437eab56887563cd9a6e": { "views": [] }, "e92ede4cfc96436b84e63809bcb22385": { "views": [] }, "e949474f6aa64c5dada603476ea6cabd": { "views": [] }, "e98e59c3156c49c1bb27be7a478c3654": { "views": [] }, "e9ea6f88d1334fbcab7f9c9a11cf4a50": { "views": [] }, "ea09e5da878c42f2b533856dc3149e3e": { "views": [] }, "ea74036074054593b1cc31fec030d2a2": { "views": [] }, "ea8d97fb8c0d499095cceb133e4d7d9c": { "views": [] }, "eafbea5bce1f4ab4bcbb0aa08598af0f": { "views": [] }, "ec01e6cdc5a54f068f1bb033415b4a06": { "views": [] }, "ec2d1f18f2e841b184f5d4cd15979d46": { "views": [] }, "ec923af478b94ad99bdfd3257f48cb06": { "views": [] }, "ed02e2272e844678979bd6a3c00f5cb3": { "views": [] }, "ed80296f5f5e42e694dfc5cc7fd3acee": { "views": [] }, "ee4df451ca9d4ed48044b25b19dc3f3f": { "views": [] }, "ee77219007884e089fc3c1479855c469": { "views": [] }, "ef372681937b4e90a04b0d530b217edb": { "views": [] }, "ef452efe39d34db6b4785cb816865ca3": { "views": [] }, "efcb07343f244ff084ea49dbc7e3d811": { "views": [] }, "f083a8e4c8574fe08f5eb0aac66c1e71": { "views": [] }, "f09d7c07bec64811805db588515af7f6": { "views": [] }, "f0ef654c93974add9410a6e243e0fbf2": { "views": [] }, "f20d7c2fcf144f5da875c6af5ffd35cb": { "views": [] }, "f234eb38076146b9a640f44b7ef30892": { "views": [] }, "f24d087598434ed1bb7f5ae3b0b4647a": { "views": [] }, "f262055f3f1b48029f9e2089f752b0b8": { "views": [ { "cell_index": 27 } ] }, "f2d40a380f884b1b95992ccc7c3df04e": { "views": [] }, "f2e2e2e5177542aa9e5ca3d69508fb89": { "views": [] }, "f31914f694384908bec466fc2945f1c7": { "views": [] }, "f31cbea99df94f2281044c369ef1962d": { "views": [] }, "f32c6c5551f540709f7c7cd9078f1aad": { "views": [] }, "f337eb824d654f0fbd688e2db3c5bf7b": { "views": [] }, "f36f776a7767495cbda2f649c2b3dd48": { "views": [] }, "f3cef080253c46989413aad84b478199": { "views": [] }, "f3df35ce53e0466e81a48234b36a1430": { "views": [ { "cell_index": 27 } ] }, "f3fa0f8a41ab4ede9c4e20f16e35237d": { "views": [] }, 
"f42e4f996f254a1bb7fe6f4dfc49aba3": { "views": [] }, "f437babcddc64a8aa238fc7013619fbb": { "views": [] }, "f44a5661ed1f4b5d97849cf4bb5e862e": { "views": [] }, "f44d24e28afa475da40628b4fd936922": { "views": [] }, "f44d5e6e993745b8b12891d1f3af3dc3": { "views": [] }, "f457cb5e76be46a29d9f49ba0dc135f1": { "views": [] }, "f4691cbe84534ef6b7d3fca530cf1704": { "views": [] }, "f4ca26fbbdbf49dda5d1b8affdecfa3e": { "views": [] }, "f54998361fe84a8a95b2607fbe367d52": { "views": [] }, "f54bdb1d3bfb47af9e7aaabb4ed12eff": { "views": [] }, "f54c28b82f7d498b83bf6908e19b6d1b": { "views": [] }, "f5cc05fcee4d4c3e80163c6e9c072b6e": { "views": [] }, "f621b91a209e4997a47cf458f8a5027f": { "views": [] }, "f665bf176eb443f6867cef8fdd79b4e5": { "views": [] }, "f6e27824f5e84bd8b4671e9eb030b20f": { "views": [] }, "f6f162ac0811434ea95875f6335bd484": { "views": [] }, "f6f629e6fb164c97acdc50c25d1354ee": { "views": [] }, "f71adee125f74ddd8302aa2796646d67": { "views": [] }, "f731d66445aa4543800a6bb3e9267936": { "views": [] }, "f8f8e8c27fff45afa309a849d1655e29": { "views": [] }, "f913752b9e86487cb197f894d667d432": { "views": [] }, "f92cde8d24064ae5afd4cd577eaa895a": { "views": [] }, "f944674b7ca345a582de627055614499": { "views": [] }, "f9458080ed534d25856c67ce8f93d5a1": { "views": [ { "cell_index": 27 } ] }, "f986f98d05dd4b9fa8a3c1111c1cea9b": { "views": [] }, "f9f7bc097f654e41b68f2d849c99a1a1": { "views": [] }, "fa00693458bc45669e2ed4ee536e98d6": { "views": [] }, "fa2f219e60ff453da3842df62a371813": { "views": [] }, "fa6cbfe76fff48848dc08a9344de84ff": { "views": [] }, "fb3b6d5e405d4e1b87e82bcc8ae3df0f": { "views": [] }, "fbe27ee7dc93467292b67f68935ae6f0": { "views": [] }, "fc494b2bcade4c3a890f08386dd8aab0": { "views": [] }, "fd98ac9b76cc44f09bc3b684caf1882d": { "views": [] }, "feb9bf5d951c40d4a87d57a4de5e819a": { "views": [] }, "fedfd679505d409fa74ccaa52b87fcce": { "views": [] }, "fef0278d4386407f96c44b4affe437b8": { "views": [] }, "ff29b06d50b048d6bbcbdb5a8665dcde": { "views": [] }, "ff3c868e31c0430dbf5b85415da9a24b": { "views": [] }, "ff8a91a101044f4fba19cdfffc39e0d3": { "views": [] }, "ffbca26ec77b492bbbda1be40b044d8e": { "views": [] }, "fff5f5bc334942bd851ac24f782f4f3c": { "views": [] } }, "version": "1.1.1" } }, "nbformat": 4, "nbformat_minor": 1 }