{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# DiscreteDP\n", "\n", "***Implementation Details***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Daisuke Oyama** \n", "*Faculty of Economics, University of Tokyo*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook describes the implementation details of the DiscreteDP type and its methods." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the theoretical background and notation,\n", "see the lecture [Discrete Dynamic Programming](http://quant-econ.net/py/discrete_dp.html)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solution methods" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following solution algorithms are currently implemented for the DiscreteDP type:\n", "\n", "* value iteration;\n", "* policy iteration (default);\n", "* modified policy iteration.\n", "\n", "Policy iteration computes an exact optimal policy in finitely many iterations,\n", "while value iteration and modified policy iteration return an $\\varepsilon$-optimal policy\n", "for a prespecified value of $\\varepsilon$.\n", "\n", "Value iteration relies on (only) the fact that\n", "the Bellman operator $T$ is a contraction mapping\n", "and thus iterative application of $T$ to any initial function $v^0$\n", "converges to its unique fixed point $v^*$.\n", "\n", "Policy iteration more closely exploits the particular structure of the problem,\n", "where each iteration consists of a policy evaluation step,\n", "which computes the value $v_{\\sigma}$ of a policy $\\sigma$\n", "by solving the linear equation $v = T_{\\sigma} v$,\n", "and a policy improvement step, which computes a $v_{\\sigma}$-greedy policy.\n", "\n", "Modified policy iteration replaces the policy evaluation step\n", "in policy iteration with \"partial policy evaluation\",\n", "which computes an approximation of the value of a policy $\\sigma$\n", "by iterating 
$T_{\\sigma}$ for a specified number of times." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below we describe our implementation of these algorithms in more detail.\n", "(While not explicit, in the actual implementation each algorithm is terminated\n", "when the number of iterations reaches max_iter.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Value iteration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "solve(ddp, v_init, VFI; max_iter, epsilon)\n", "\n", "1. Choose any $v^0 \\in \\mathbb{R}^n$, and\n", " specify $\\varepsilon > 0$; set $i = 0$.\n", "2. Compute $v^{i+1} = T v^i$.\n", "3. If $\\lVert v^{i+1} - v^i\\rVert < [(1 - \\beta) / (2\\beta)] \\varepsilon$,\n", " then go to step 4;\n", " otherwise, set $i = i + 1$ and go to step 2.\n", "4. Compute a $v^{i+1}$-greedy policy $\\sigma$, and return $v^{i+1}$ and $\\sigma$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Given $\\varepsilon > 0$,\n", "the value iteration algorithm terminates in a finite number of iterations,\n", "and returns an $\\varepsilon/2$-approximation of the optimal value function and\n", "an $\\varepsilon$-optimal policy function\n", "(unless max_iter is reached)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Policy iteration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "solve(ddp, v_init, PFI; max_iter)\n", "\n", "1. Choose any $v^0 \\in \\mathbb{R}^n$ and compute a $v^0$-greedy policy $\\sigma^0$;\n", " set $i = 0$.\n", "2. [Policy evaluation]\n", " Compute the value $v_{\\sigma^i}$ by solving the equation $v = T_{\\sigma^i} v$.\n", "3. [Policy improvement]\n", " Compute a $v_{\\sigma^i}$-greedy policy $\\sigma^{i+1}$;\n", " let $\\sigma^{i+1} = \\sigma^i$ if possible.\n", "4. If $\\sigma^{i+1} = \\sigma^i$,\n", " then return $v_{\\sigma^i}$ and $\\sigma^{i+1}$;\n", " otherwise, set $i = i + 1$ and go to step 2."
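] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The policy iteration loop above can be sketched with the QuantEcon routines `compute_greedy` and `evaluate_policy` (a simplified illustration, not the actual `solve` implementation; the tie-breaking rule \"let $\\sigma^{i+1} = \\sigma^i$ if possible\" is glossed over):\n", "\n", "```julia\n", "# Sketch of the four steps above, assuming `ddp` is a DiscreteDP\n", "function policy_iteration_sketch(ddp, v_init)\n", "    sigma = compute_greedy(ddp, v_init)    # step 1: v^0-greedy policy\n", "    while true\n", "        v = evaluate_policy(ddp, sigma)        # step 2: solve v = T_sigma v\n", "        sigma_new = compute_greedy(ddp, v)     # step 3: v_sigma-greedy policy\n", "        sigma_new == sigma && return v, sigma  # step 4: stop when unchanged\n", "        sigma = sigma_new\n", "    end\n", "end\n", "```"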
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The policy iteration algorithm terminates in a finite number of iterations, and\n", "returns an optimal value function and an optimal policy function\n", "(unless max_iter is reached)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Modified policy iteration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "solve(ddp, v_init, MPFI; max_iter, epsilon, k)\n", "\n", "1. Choose any $v^0 \\in \\mathbb{R}^n$, and\n", " specify $\\varepsilon > 0$ and $k \\geq 0$;\n", " set $i = 0$.\n", "2. [Policy improvement]\n", " Compute a $v^i$-greedy policy $\\sigma^{i+1}$;\n", " let $\\sigma^{i+1} = \\sigma^i$ if possible (for $i \\geq 1$).\n", "3. Compute $u = T v^i$ ($= T_{\\sigma^{i+1}} v^i$).\n", " If $\\mathrm{span}(u - v^i) < [(1 - \\beta) / \\beta] \\varepsilon$, then go to step 5;\n", " otherwise go to step 4.\n", "4. [Partial policy evaluation]\n", " Compute $v^{i+1} = (T_{\\sigma^{i+1}})^k u$ ($= (T_{\\sigma^{i+1}})^{k+1} v^i$).\n", " Set $i = i + 1$ and go to step 2.\n", "5. Return\n", " $v = u + [\\beta / (1 - \\beta)] [(\\min(u - v^i) + \\max(u - v^i)) / 2] \\mathbf{1}$\n", " and $\\sigma^{i+1}$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Given $\\varepsilon > 0$,\n", "provided that $v^0$ is such that $T v^0 \\geq v^0$,\n", "the modified policy iteration algorithm terminates in a finite number of iterations,\n", "and returns an $\\varepsilon/2$-approximation of the optimal value function and\n", "an $\\varepsilon$-optimal policy function\n", "(unless max_iter is reached)."
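] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The span test in step 3 and the correction term in step 5 involve only elementary vector operations; as a minimal sketch (hypothetical helper names, not part of QuantEcon):\n", "\n", "```julia\n", "# Span semi-norm used in the termination test of step 3\n", "span(z) = maximum(z) - minimum(z)\n", "\n", "# Step 5: v = u + [beta / (1 - beta)] [(min(u - v) + max(u - v)) / 2] 1\n", "function mpi_return_value(u, v, beta)\n", "    d = u - v\n", "    return u .+ (beta / (1 - beta)) * ((minimum(d) + maximum(d)) / 2)\n", "end\n", "```"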
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Remarks*\n", "\n", "* Here we employ the termination criterion based on the *span semi-norm*,\n", " where $\\mathrm{span}(z) = \\max(z) - \\min(z)$ for $z \\in \\mathbb{R}^n$.\n", " Since $\\mathrm{span}(T v - v) \\leq 2\\lVert T v - v\\rVert$,\n", " this reaches $\\varepsilon$-optimality faster than the norm-based criterion\n", " as employed in the value iteration above.\n", "* Except for the termination criterion,\n", " modified policy iteration is equivalent to value iteration if $k = 0$ and\n", " to policy iteration in the limit as $k \\to \\infty$.\n", "* Thus, if one would like to have value iteration with the span-based rule,\n", " run modified policy iteration with $k = 0$.\n", "* In returning a value function, our implementation is slightly different from\n", " that by Puterman (2005), Section 6.6.3, pp.201-202, which uses\n", " $u + [\\beta / (1 - \\beta)] \\min(u - v^i) \\mathbf{1}$.\n", "* The condition for convergence, $T v^0 \\geq v^0$, is satisfied\n", " for example when $v^0 = v_{\\sigma}$ for some policy $\\sigma$,\n", " or when $v^0(s) = \\min_{(s', a)} r(s', a)$ for all $s$.\n", " If v_init is not specified, it is set to the latter, $\\min_{(s', a)} r(s', a)$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Illustration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We illustrate the algorithms above\n", "by the simple example from Puterman (2005), Section 3.1, pp.33-35."
] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "using QuantEcon\n", "using DataFrames" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "n = 2 # Number of states\n", "m = 2 # Number of actions\n", "\n", "# Reward array\n", "R = [5 10; -1 -Inf]\n", "\n", "# Transition probability array\n", "Q = Array{Float64}(n, m, n)\n", "Q[1, 1, :] = [0.5, 0.5]\n", "Q[1, 2, :] = [0, 1]\n", "Q[2, 1, :] = [0, 1]\n", "Q[2, 2, :] = [0.5, 0.5] # Arbitrary\n", "\n", "# Discount rate\n", "beta = 0.95\n", "\n", "ddp = DiscreteDP(R, Q, beta);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Analytical solution:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "function sigma_star(beta)\n", " sigma = Vector{Int64}(2)\n", " sigma[2] = 1\n", " if beta > 10/11\n", " sigma[1] = 1\n", " else\n", " sigma[1] = 2\n", " end\n", " return sigma\n", "end\n", "\n", "function v_star(beta)\n", " v = Vector{Float64}(2)\n", " v[2] = -1 / (1 - beta)\n", " if beta > 10/11\n", " v[1] = (5 - 5.5*beta) / ((1 - 0.5*beta) * (1 - beta))\n", " else\n", " v[1] = (10 - 11*beta) / (1 - beta)\n", " end\n", " return v\n", "end;" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "2-element Array{Int64,1}:\n", " 1\n", " 1" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sigma_star(beta)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "2-element Array{Float64,1}:\n", " -8.57143\n", " -20.0 " ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "v_star(beta)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Value iteration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Solve the problem by value iteration;\n", "see Example 6.3.1, p.164 in Puterman 
(2005)." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "epsilon = 1e-2\n", "v_init = [0., 0.]\n", "res_vi = solve(ddp, v_init, VFI, epsilon=epsilon);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The number of iterations required to satisfy the termination criterion:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/html": [ "162" ], "text/plain": [ "162" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "res_vi.num_iter" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The returned value function:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "2-element Array{Float64,1}:\n", " -8.56651\n", " -19.9951 " ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "res_vi.v" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is indeed an $\\varepsilon/2$-approximation of $v^*$:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/html": [ "true" ], "text/plain": [ "true" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "maximum(abs, res_vi.v - v_star(beta)) < epsilon/2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The returned policy function:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "2-element Array{Int64,1}:\n", " 1\n", " 1" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "res_vi.sigma" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Value iteration converges very slowly.\n", "Let us replicate Table 6.3.1 on p.165:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "num_reps = 164\n", "values = Matrix{Float64}(num_reps, n)\n", 
"diffs = Vector{Float64}(num_reps)\n", "spans = Vector{Float64}(num_reps)\n", "v = [0, 0]\n", "\n", "values[1, :] = v\n", "diffs[1] = NaN\n", "spans[1] = NaN\n", "\n", "for i in 2:num_reps\n", " v_new = bellman_operator(ddp, v)\n", " values[i, :] = v_new\n", " diffs[i] = maximum(abs, v_new - v)\n", " spans[i] = maximum(v_new - v) - minimum(v_new - v)\n", " v = v_new\n", "end" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "scrolled": false }, "outputs": [ { "data": { "text/html": [ "
iv^i(1)v^i(2)‖v^i - v^(i-1)‖
100.00.0NaN
2110.0-1.010.0
329.274999999999999-1.950.95
438.479375-2.85250.9025000000000001
547.672765624999999-3.7098750.8573749999999998
656.882373046874999-4.5243812499999990.8145062499999995
766.120046103515625-5.2981621874999990.7737809374999998
875.390394860107422-6.0332540781249990.7350918906250001
984.694641871441651-6.7315913742187490.69833729609375
1094.032448986180879-7.3950118055078120.6634204312890626
11103.4027826608197067-8.025261215232420.630249409724609
1220-1.4017104317198141-12.830281551829150.37735360253530636
1330-4.278653292750171-15.707224721141240.22593554099256608
1440-6.001185440126598-17.429756868697920.1352759542790558
1550-7.032529065894289-18.4611004944657150.08099471081759191
1660-7.6500325916895076-19.0786040202609360.048494525249424214
1770-8.019754762693053-19.448326191264480.029035463617658408
1880-8.24112108372828-19.6696925122997080.0173846046158026
1990-8.373661277235374-19.80223270580680.010408804957535267
20100-8.453017987021873-19.88158941559330.006232136021406376
21110-8.500531780547469-19.92910320911890.0037314100463738953
22120-8.528980043854592-19.957551472426020.002234133030208696
23130-8.54601306995374-19.9745844985251680.0013376579723587412
24140-8.556211371866315-19.9847828004377450.0008009052401192207
25150-8.56231747193888-19.9908889005103060.00047953155208801945
26160-8.56597341960701-19.994544848178440.00028711325376562513
27161-8.566246177198087-19.9948176057695160.00027275759107681097
28162-8.56650529690961-19.9950767254810380.0002591197115222599
29163-8.566751460635556-19.9953228892069850.0002461637259472127
" ], "text/plain": [ "29×4 DataFrames.DataFrame\n", "│ Row │ i │ v^i(1) │ v^i(2) │ ‖v^i - v^(i-1)‖ │\n", "├─────┼─────┼──────────┼──────────┼─────────────────┤\n", "│ 1 │ 0 │ 0.0 │ 0.0 │ NaN │\n", "│ 2 │ 1 │ 10.0 │ -1.0 │ 10.0 │\n", "│ 3 │ 2 │ 9.275 │ -1.95 │ 0.95 │\n", "│ 4 │ 3 │ 8.47937 │ -2.8525 │ 0.9025 │\n", "│ 5 │ 4 │ 7.67277 │ -3.70987 │ 0.857375 │\n", "│ 6 │ 5 │ 6.88237 │ -4.52438 │ 0.814506 │\n", "│ 7 │ 6 │ 6.12005 │ -5.29816 │ 0.773781 │\n", "│ 8 │ 7 │ 5.39039 │ -6.03325 │ 0.735092 │\n", "│ 9 │ 8 │ 4.69464 │ -6.73159 │ 0.698337 │\n", "│ 10 │ 9 │ 4.03245 │ -7.39501 │ 0.66342 │\n", "│ 11 │ 10 │ 3.40278 │ -8.02526 │ 0.630249 │\n", "⋮\n", "│ 18 │ 80 │ -8.24112 │ -19.6697 │ 0.0173846 │\n", "│ 19 │ 90 │ -8.37366 │ -19.8022 │ 0.0104088 │\n", "│ 20 │ 100 │ -8.45302 │ -19.8816 │ 0.00623214 │\n", "│ 21 │ 110 │ -8.50053 │ -19.9291 │ 0.00373141 │\n", "│ 22 │ 120 │ -8.52898 │ -19.9576 │ 0.00223413 │\n", "│ 23 │ 130 │ -8.54601 │ -19.9746 │ 0.00133766 │\n", "│ 24 │ 140 │ -8.55621 │ -19.9848 │ 0.000800905 │\n", "│ 25 │ 150 │ -8.56232 │ -19.9909 │ 0.000479532 │\n", "│ 26 │ 160 │ -8.56597 │ -19.9945 │ 0.000287113 │\n", "│ 27 │ 161 │ -8.56625 │ -19.9948 │ 0.000272758 │\n", "│ 28 │ 162 │ -8.56651 │ -19.9951 │ 0.00025912 │\n", "│ 29 │ 163 │ -8.56675 │ -19.9953 │ 0.000246164 │" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "col_names = map(Symbol, [\"i\", \"v^i(1)\", \"v^i(2)\", \"‖v^i - v^(i-1)‖\", \"span(v^i - v^(i-1))\"])\n", "df = DataFrame(Any[0:num_reps-1, values[:, 1], values[:, 2], diffs, spans], col_names)\n", "\n", "display_nums = [i+1 for i in 0:9]\n", "append!(display_nums, [10*i+1 for i in 1:16])\n", "append!(display_nums, [160+i+1 for i in 1:3])\n", "df[display_nums, [1, 2, 3, 4]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the other hand, the span decreases faster than the norm;\n", "the following replicates Table 6.6.1, page 205:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": 
{}, "outputs": [ { "data": { "text/html": [ "
i‖v^i - v^(i-1)‖span(v^i - v^(i-1))
1110.011.0
220.950.22499999999999853
330.90250000000000010.10687500000000072
440.85737499999999980.050765624999999925
550.81450624999999950.024113671874999465
660.77378093749999980.011453994140625312
770.73509189062500010.0054406472167976005
880.698337296093750.0025843074279787714
990.66342043128906260.0012275460282902273
10100.6302494097246090.0005830843634369032
11110.59873693923837830.000276965072632418
12120.56880009227645980.00013155840950096476
13200.377353602535306363.40931783249232e-7
14300.225935540992566081.993445408743355e-10
15400.13527595427905581.1546319456101628e-13
16500.080994710817591910.0
17600.0484945252494242141.7763568394002505e-15
" ], "text/plain": [ "17×3 DataFrames.DataFrame\n", "│ Row │ i │ ‖v^i - v^(i-1)‖ │ span(v^i - v^(i-1)) │\n", "├─────┼────┼─────────────────┼─────────────────────┤\n", "│ 1 │ 1 │ 10.0 │ 11.0 │\n", "│ 2 │ 2 │ 0.95 │ 0.225 │\n", "│ 3 │ 3 │ 0.9025 │ 0.106875 │\n", "│ 4 │ 4 │ 0.857375 │ 0.0507656 │\n", "│ 5 │ 5 │ 0.814506 │ 0.0241137 │\n", "│ 6 │ 6 │ 0.773781 │ 0.011454 │\n", "│ 7 │ 7 │ 0.735092 │ 0.00544065 │\n", "│ 8 │ 8 │ 0.698337 │ 0.00258431 │\n", "│ 9 │ 9 │ 0.66342 │ 0.00122755 │\n", "│ 10 │ 10 │ 0.630249 │ 0.000583084 │\n", "│ 11 │ 11 │ 0.598737 │ 0.000276965 │\n", "│ 12 │ 12 │ 0.5688 │ 0.000131558 │\n", "│ 13 │ 20 │ 0.377354 │ 3.40932e-7 │\n", "│ 14 │ 30 │ 0.225936 │ 1.99345e-10 │\n", "│ 15 │ 40 │ 0.135276 │ 1.15463e-13 │\n", "│ 16 │ 50 │ 0.0809947 │ 0.0 │\n", "│ 17 │ 60 │ 0.0484945 │ 1.77636e-15 │" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "display_nums = [i+1 for i in 1:12]\n", "append!(display_nums, [10*i+1 for i in 2:6])\n", "df[display_nums, [1, 4, 5]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The span-based termination criterion is satisfied when $i = 11$:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/html": [ "0.0005263157894736847" ], "text/plain": [ "0.0005263157894736847" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "epsilon * (1-beta) / beta" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "true" ], "text/plain": [ "true" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "spans[12] < epsilon * (1-beta) / beta" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In fact, modified policy iteration with $k = 0$ terminates with $11$ iterations:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "epsilon = 1e-2\n", "v_init = [0., 
0.]\n", "k = 0\n", "res_mpi_1 = solve(ddp, v_init, MPFI, epsilon=epsilon, k=k);" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/html": [ "11" ], "text/plain": [ "11" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "res_mpi_1.num_iter" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "2-element Array{Float64,1}:\n", " -8.56905\n", " -19.9974 " ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "res_mpi_1.v" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Policy iteration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If $\\{\\sigma^i\\}$ is the sequence of policies obtained by policy iteration\n", "with an initial policy $\\sigma^0$,\n", "one can show that $T^i v_{\\sigma^0} \\leq v_{\\sigma^i}$ ($\\leq v^*$),\n", "so that policy iteration requires at most as many iterations as value iteration,\n", "and in many cases significantly fewer."
] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "v_init = [0., 0.]\n", "res_pi = solve(ddp, v_init, PFI);" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/html": [ "2" ], "text/plain": [ "2" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "res_pi.num_iter" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Policy iteration returns the exact optimal value function (up to rounding errors):" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "2-element Array{Float64,1}:\n", " -8.57143\n", " -20.0 " ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "res_pi.v" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/html": [ "3.552713678800501e-15" ], "text/plain": [ "3.552713678800501e-15" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "maximum(abs, res_pi.v - v_star(beta))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To look into the iterations:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Iterate 0\n", " value: [0.0, 0.0]\n", " policy: [2, 1]\n", "Iterate 1\n", " value: [-9.0, -20.0]\n", " policy: [1, 1]\n", "Iterate 2\n", " value: [-8.57143, -20.0]\n", " policy: [1, 1]\n", "Terminated\n" ] } ], "source": [ "v = [0., 0.]\n", "sigma = [0, 0] # Dummy\n", "sigma_new = compute_greedy(ddp, v)\n", "i = 0\n", "\n", "while true\n", " println(\"Iterate $i\")\n", " println(\" value: $v\")\n", " println(\" policy: $sigma_new\")\n", " if all(sigma_new .== sigma)\n", " break\n", " end\n", " copy!(sigma, sigma_new)\n", " v = evaluate_policy(ddp, sigma)\n", " sigma_new = compute_greedy(ddp, v)\n", " i += 1\n", "end\n", "\n", 
"println(\"Terminated\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "See Example 6.4.1, pp.176-177." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Modified policy iteration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The evaluation step in policy iteration,\n", "which solves the linear equation $v = T_{\\sigma} v$\n", "to obtain the policy value $v_{\\sigma}$,\n", "can be expensive for problems with a large number of states.\n", "Modified policy iteration reduces the cost of this step\n", "by using an approximation of $v_{\\sigma}$ obtained by iteration of $T_{\\sigma}$.\n", "The tradeoff is that this approach only computes an $\\varepsilon$-optimal policy,\n", "and for small $\\varepsilon$, takes a larger number of iterations than policy iteration\n", "(but many fewer than value iteration)." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "epsilon = 1e-2\n", "v_init = [0., 0.]\n", "k = 6\n", "res_mpi = solve(ddp, v_init, MPFI, epsilon=epsilon, k=k);" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "data": { "text/html": [ "4" ], "text/plain": [ "4" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "res_mpi.num_iter" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The returned value function:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "2-element Array{Float64,1}:\n", " -8.57137\n", " -19.9999 " ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "res_mpi.v" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is indeed an $\\varepsilon/2$-approximation of $v^*$:" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/html": [ "true" ], "text/plain": [ "true" ] }, "execution_count": 27, "metadata": {}, "output_type": 
"execute_result" } ], "source": [ "maximum(abs, res_mpi.v - v_star(beta)) < epsilon/2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To look into the iterations:" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# T_sigma operator\n", "function T_sigma{T<:Integer}(ddp::DiscreteDP, sigma::Array{T})\n", " R_sigma, Q_sigma = RQ_sigma(ddp, sigma)\n", " return v -> R_sigma + ddp.beta * Q_sigma * v\n", "end;" ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Iterate 0\n", " v: [0, 0]\n", "Iterate 1\n", " sigma: [2, 1]\n", " T_sigma(v): [10.0, -1.0]\n", " span: 11.0\n", " T_sigma^k+1(v): [4.96675, -6.03325]\n", "Iterate 2\n", " sigma: [1, 1]\n", " T_sigma(v): [4.49341, -6.73159]\n", " span: 0.22499999999999964\n", " T_sigma^k+1(v): [1.17973, -10.2465]\n", "Iterate 3\n", " sigma: [1, 1]\n", " T_sigma(v): [0.693285, -10.7342]\n", " span: 0.0012275460282902273\n", " T_sigma^k+1(v): [-1.76021, -13.1888]\n", "Iterate 4\n", " sigma: [1, 1]\n", " T_sigma(v): [-2.10076, -13.5293]\n", " span: 6.6971966727891186e-6\n", "Terminated\n", " sigma: [1, 1]\n", " v: [-8.57137, -19.9999]\n" ] } ], "source": [ "epsilon = 1e-2\n", "v = [0, 0]\n", "k = 6\n", "i = 0\n", "println(\"Iterate $i\")\n", "println(\" v: $v\")\n", "\n", "sigma = Vector{Int64}(n)\n", "u = Vector{Float64}(n)\n", "\n", "while true\n", " i += 1\n", " bellman_operator!(ddp, v, u, sigma) # u and sigma are modified in place\n", " diff = u - v\n", " span = maximum(diff) - minimum(diff)\n", " println(\"Iterate $i\")\n", " println(\" sigma: $sigma\")\n", " println(\" T_sigma(v): $u\")\n", " println(\" span: $span\")\n", " if span < epsilon * (1-ddp.beta) / ddp.beta\n", " v = u + ((maximum(diff) + minimum(diff)) / 2) *\n", " (ddp.beta / (1 - ddp.beta))\n", " break\n", " end\n", " \n", " v = compute_fixed_point(T_sigma(ddp, sigma), u,\n", " err_tol=0, 
max_iter=k, verbose=false)\n", " # The above is equivalent to the following:\n", " # for j in 1:k\n", " #    v = T_sigma(ddp, sigma)(u)\n", " #    copy!(u, v)\n", " # end\n", " # copy!(v, u)\n", " \n", " println(\" T_sigma^k+1(v): $v\")\n", "end\n", "\n", "println(\"Terminated\")\n", "println(\" sigma: $sigma\")\n", "println(\" v: $v\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Compare this with the implementation using the norm-based termination rule\n", "described in Example 6.5.1, pp.187-188." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Reference" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* M.L. Puterman,\n", " [*Markov Decision Processes: Discrete Stochastic Dynamic Programming*](http://onlinelibrary.wiley.com/book/10.1002/9780470316887),\n", " Wiley-Interscience, 2005." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Julia 0.6.0", "language": "julia", "name": "julia-0.6" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "0.6.0" } }, "nbformat": 4, "nbformat_minor": 1 }