{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Tools for Game Theory in QuantEcon.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Daisuke Oyama** \n", "*Faculty of Economics, University of Tokyo*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This notebook demonstrates the functionalities of the [`game_theory`](http://quanteconpy.readthedocs.io/en/latest/game_theory.html) module\n", "in [QuantEcon.py](https://github.com/QuantEcon/QuantEcon.py)." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import numpy as np\n", "import quantecon.game_theory as gt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Normal Form Games" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An $N$-player *normal form game* is a triplet $g = (I, (A_i)_{i \\in I}, (u_i)_{i \\in I})$ where\n", "\n", "* $I = \\{0, \\ldots, N-1\\}$ is the set of players,\n", "* $A_i = \\{0, \\ldots, n_i-1\\}$ is the set of actions of player $i \\in I$, and\n", "* $u_i \\colon A_i \\times A_{i+1} \\times \\cdots \\times A_{i+N-1} \\to \\mathbb{R}$\n", " is the payoff function of player $i \\in I$,\n", "\n", "where $i+j$ is understood modulo $N$.\n", "\n", "Note that we adopt the convention that the $0$-th argument of the payoff function $u_i$ is\n", "player $i$'s own action\n", "and the $j$-th argument, $j = 1, \\ldots, N-1$, is player ($i+j$)'s action (modulo $N$)." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In our module,\n", "a normal form game and a player are represented by\n", "the classes `NormalFormGame` and `Player`, respectively.\n", "\n", "A `Player` carries the player's payoff function and implements in particular\n", "a method that returns the best response action(s) given an action of the opponent player,\n", "or a profile of the opponents' actions if there is more than one opponent.\n", "\n", "A `NormalFormGame` is in effect a container of `Player` instances." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating a `NormalFormGame`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are several ways to create a `NormalFormGame` instance." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first is to pass an array of payoffs for all the players, i.e.,\n", "an $(N+1)$-dimensional array of shape $(n_0, \\ldots, n_{N-1}, N)$\n", "whose $(a_0, \\ldots, a_{N-1})$-entry contains an array of the $N$ payoff values\n", "for the action profile $(a_0, \\ldots, a_{N-1})$."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As an example, consider the following game (\"**Matching Pennies**\"):\n", "\n", "$\n", "\\begin{bmatrix}\n", "1, -1 & -1, 1 \\\\\n", "-1, 1 & 1, -1\n", "\\end{bmatrix}\n", "$" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": true }, "outputs": [], "source": [ "matching_pennies_bimatrix = [[(1, -1), (-1, 1)],\n", " [(-1, 1), (1, -1)]]\n", "g_MP = gt.NormalFormGame(matching_pennies_bimatrix)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2-player NormalFormGame with payoff profile array:\n", "[[[ 1, -1], [-1, 1]],\n", " [[-1, 1], [ 1, -1]]]\n" ] } ], "source": [ "print(g_MP)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Player in a 2-player normal form game with payoff array:\n", "[[ 1, -1],\n", " [-1, 1]]\n" ] } ], "source": [ "print(g_MP.players[0]) # Player instance for player 0" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Player in a 2-player normal form game with payoff array:\n", "[[-1, 1],\n", " [ 1, -1]]\n" ] } ], "source": [ "print(g_MP.players[1]) # Player instance for player 1" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 1, -1],\n", " [-1, 1]])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_MP.players[0].payoff_array # Player 0's payoff array" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[-1, 1],\n", " [ 1, -1]])" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_MP.players[1].payoff_array # Player 1's payoff array" ] }, { "cell_type": "code", 
"execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([ 1, -1])" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_MP[0, 0] # payoff profile for action profile (0, 0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If a square matrix (2-dimensional array) is given,\n", "then it is considered to be a symmetric two-player game.\n", "\n", "Consider the following game (symmetric $2 \\times 2$ \"**Coordination Game**\"):\n", "\n", "$\n", "\\begin{bmatrix}\n", "4, 4 & 0, 3 \\\\\n", "3, 0 & 2, 2\n", "\\end{bmatrix}\n", "$" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": true }, "outputs": [], "source": [ "coordination_game_matrix = [[4, 0],\n", " [3, 2]] # square matrix\n", "g_Coo = gt.NormalFormGame(coordination_game_matrix)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2-player NormalFormGame with payoff profile array:\n", "[[[4, 4], [0, 3]],\n", " [[3, 0], [2, 2]]]\n" ] } ], "source": [ "print(g_Coo)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[4, 0],\n", " [3, 2]])" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_Coo.players[0].payoff_array # Player 0's payoff array" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[4, 0],\n", " [3, 2]])" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_Coo.players[1].payoff_array # Player 1's payoff array" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another example (\"**Rock-Paper-Scissors**\"):\n", "\n", "$\n", "\\begin{bmatrix}\n", " 0, 0 & -1, 1 & 1, -1 \\\\\n", " 1, -1 & 0, 0 & -1, 1 \\\\\n", "-1, 1 & 1, -1 & 0, 0\n", "\\end{bmatrix}\n", "$" ] }, { "cell_type": "code", 
"execution_count": 13, "metadata": { "collapsed": true }, "outputs": [], "source": [ "RPS_matrix = [[ 0, -1, 1],\n", " [ 1, 0, -1],\n", " [-1, 1, 0]]\n", "g_RPS = gt.NormalFormGame(RPS_matrix)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2-player NormalFormGame with payoff profile array:\n", "[[[ 0, 0], [-1, 1], [ 1, -1]],\n", " [[ 1, -1], [ 0, 0], [-1, 1]],\n", " [[-1, 1], [ 1, -1], [ 0, 0]]]\n" ] } ], "source": [ "print(g_RPS)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The second is to specify the sizes of the action sets of the players\n", "to create a `NormalFormGame` instance filled with payoff zeros,\n", "and then set the payoff values to each entry.\n", "\n", "Let us construct the following game (\"**Prisoners' Dilemma**\"):\n", "\n", "$\n", "\\begin{bmatrix}\n", "1, 1 & -2, 3 \\\\\n", "3, -2 & 0, 0\n", "\\end{bmatrix}\n", "$" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": true }, "outputs": [], "source": [ "g_PD = gt.NormalFormGame((2, 2)) # There are 2 players, each of whom has 2 actions\n", "g_PD[0, 0] = 1, 1\n", "g_PD[0, 1] = -2, 3\n", "g_PD[1, 0] = 3, -2\n", "g_PD[1, 1] = 0, 0" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2-player NormalFormGame with payoff profile array:\n", "[[[ 1., 1.], [-2., 3.]],\n", " [[ 3., -2.], [ 0., 0.]]]\n" ] } ], "source": [ "print(g_PD)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, a `NormalFormGame` instance can be constructed by giving an array of `Player` instances,\n", "as explained in the next section." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating a `Player`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A `Player` instance is created by passing an array of dimension $N$\n", "that represents the player's payoff function (\"payoff array\").\n", "\n", "Consider the following game (a variant of \"**Battle of the Sexes**\"):\n", "\n", "$\n", "\\begin{bmatrix}\n", "3, 2 & 1, 1 \\\\\n", "0, 0 & 2, 3\n", "\\end{bmatrix}\n", "$" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": true }, "outputs": [], "source": [ "player0 = gt.Player([[3, 1],\n", " [0, 2]])\n", "player1 = gt.Player([[2, 0],\n", " [1, 3]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Beware that in `payoff_array[h, k]`, `h` refers to the player's own action,\n", "while `k` refers to the opponent player's action." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[3, 1],\n", " [0, 2]])" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "player0.payoff_array" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[2, 0],\n", " [1, 3]])" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "player1.payoff_array" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Passing an array of Player instances is the third way to create a `NormalFormGame` instance:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": true }, "outputs": [], "source": [ "g_BoS = gt.NormalFormGame((player0, player1))" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2-player NormalFormGame with payoff profile array:\n", "[[[3, 2], [1, 1]],\n", " [[0, 0], [2, 3]]]\n" ] } ], "source": [ "print(g_BoS)" ] }, { "cell_type": 
"markdown", "metadata": {}, "source": [ "### More than two players" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `game_theory` module also supports games with more than two players." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us consider the following version of $N$-player **Cournot Game**.\n", "\n", "There are $N$ firms (players) which produce a homogeneous good\n", "with common constant marginal cost $c \\geq 0$.\n", "Each firm $i$ simultaneously determines the quantity $q_i \\geq 0$ (action) of the good to produce.\n", "The inverse demand function is given by the linear function $P(Q) = a - Q$, $a > 0$,\n", "where $Q = q_0 + \\cdots + q_{N-1}$ is the aggregate supply.\n", "Then the profit (payoff) for firm $i$ is given by\n", "$$\n", "u_i(q_i, q_{i+1}, \\ldots, q_{i+N-1})\n", "= P(Q) q_i - c q_i\n", "= \\left(a - c - \\sum_{j \\neq i} q_j - q_i\\right) q_i.\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Theoretically, the set of actions, i.e., available quantities, may be\n", "the set of all nonnegative real numbers $\\mathbb{R}_+$\n", "(or a bounded interval $[0, \\bar{q}]$ with some upper bound $\\bar{q}$),\n", "but for computation on a computer we have to discretize the action space\n", "and only allow for finitely many grid points.\n", "\n", "The following script creates a `NormalFormGame` instance of the Cournot game as described above,\n", "assuming that the (common) grid of possible quantity values is stored in an array `q_grid`." 
] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from quantecon import cartesian\n", "\n", "\n", "def cournot(a, c, N, q_grid):\n", " \"\"\"\n", " Create a `NormalFormGame` instance for the symmetric N-player Cournot game\n", " with linear inverse demand a - Q and constant marginal cost c.\n", "\n", " Parameters\n", " ----------\n", " a : scalar\n", " Intercept of the demand curve\n", "\n", " c : scalar\n", " Common constant marginal cost\n", "\n", " N : scalar(int)\n", " Number of firms\n", "\n", " q_grid : array_like(scalar)\n", " Array containing the set of possible quantities\n", "\n", " Returns\n", " -------\n", " NormalFormGame\n", " NormalFormGame instance representing the Cournot game\n", "\n", " \"\"\"\n", " q_grid = np.asarray(q_grid)\n", " payoff_array = \\\n", " cartesian([q_grid]*N).sum(axis=-1).reshape([len(q_grid)]*N) * (-1) + \\\n", " (a - c)\n", " payoff_array *= q_grid.reshape([len(q_grid)] + [1]*(N-1))\n", " payoff_array += 0 # To get rid of the minus sign of -0\n", "\n", " player = gt.Player(payoff_array)\n", " return gt.NormalFormGame([player for i in range(N)])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's a simple example with three firms,\n", "marginal cost $20$, and inverse demand function $80 - Q$,\n", "where the feasible quantity values are assumed to be $10$ and $15$." 
] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": true }, "outputs": [], "source": [ "a, c = 80, 20\n", "N = 3\n", "q_grid = [10, 15] # [1/3 of Monopoly quantity, Nash equilibrium quantity]\n", "\n", "g_Cou = cournot(a, c, N, q_grid)" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "3-player NormalFormGame with payoff profile array:\n", "[[[[300, 300, 300], [250, 250, 375]],\n", " [[250, 375, 250], [200, 300, 300]]],\n", "\n", " [[[375, 250, 250], [300, 200, 300]],\n", " [[300, 300, 200], [225, 225, 225]]]]\n" ] } ], "source": [ "print(g_Cou)" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Player in a 3-player normal form game with payoff array:\n", "[[[300, 250],\n", " [250, 200]],\n", "\n", " [[375, 300],\n", " [300, 225]]]\n" ] } ], "source": [ "print(g_Cou.players[0])" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(2, 2, 2)" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_Cou.nums_actions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Nash Equilibrium" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A *Nash equilibrium* of a normal form game is a profile of actions\n", "where the action of each player is a best response to the others'." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `Player` object has a method `best_response`.\n", "\n", "Consider the Matching Pennies game `g_MP` defined above.\n", "For example, player 0's best response to the opponent's action 1 is:" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_MP.players[0].best_response(1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Player 0's best responses to the opponent's mixed action `[0.5, 0.5]`\n", "(we know they are 0 and 1):" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# By default, returns the best response action with the smallest index\n", "g_MP.players[0].best_response([0.5, 0.5])" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# With tie_breaking='random', returns randomly one of the best responses\n", "g_MP.players[0].best_response([0.5, 0.5], tie_breaking='random') # Try several times" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([0, 1])" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# With tie_breaking=False, returns an array of all the best responses\n", "g_MP.players[0].best_response([0.5, 0.5], tie_breaking=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this game, we know that `([0.5, 0.5], [0.5, 0.5])` is a (unique) Nash equilibrium." 
] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_MP.is_nash(([0.5, 0.5], [0.5, 0.5]))" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "False" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_MP.is_nash((0, 0))" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "False" ] }, "execution_count": 33, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_MP.is_nash((0, [0.5, 0.5]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Finding Nash equilibria" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are several algorithms implemented to compute Nash equilibria:\n", "\n", "* Brute force \n", " Find all pure-action Nash equilibria of an $N$-player game (if any).\n", "* Sequential best response \n", " Find one pure-action Nash equilibrium of an $N$-player game (if any).\n", "* Support enumeration \n", " Find all mixed-action Nash equilibria of a two-player nondegenerate game.\n", "* Vertex enumeration \n", " Find all mixed-action Nash equilibria of a two-player nondegenerate game.\n", "* Lemke-Howson \n", " Find one mixed-action Nash equilibrium of a two-player game.\n", "* McLennan-Tourky \n", " Find one mixed-action Nash equilibrium of an $N$-player game.\n", "\n", "For a wider variety of algorithms, see [Gambit](http://www.gambit-project.org)."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Brute force" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For small games, we can find pure-action Nash equilibria by brute force,\n", "by calling the routine [`pure_nash_brute`](http://quanteconpy.readthedocs.io/en/latest/game_theory/pure_nash.html#quantecon.game_theory.pure_nash.pure_nash_brute).\n", "It visits all the action profiles and checks whether each is a Nash equilibrium\n", "by the `is_nash` method." ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def print_pure_nash_brute(g):\n", " \"\"\"\n", " Print all pure Nash equilibria of a normal form game found by brute force.\n", " \n", " Parameters\n", " ----------\n", " g : NormalFormGame\n", " \n", " \"\"\"\n", " NEs = gt.pure_nash_brute(g)\n", " num_NEs = len(NEs)\n", " if num_NEs == 0:\n", " msg = 'no pure Nash equilibrium'\n", " elif num_NEs == 1:\n", " msg = '1 pure Nash equilibrium:\\n{0}'.format(NEs)\n", " else:\n", " msg = '{0} pure Nash equilibria:\\n{1}'.format(num_NEs, NEs)\n", "\n", " print('The game has ' + msg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Matching Pennies:" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The game has no pure Nash equilibrium\n" ] } ], "source": [ "print_pure_nash_brute(g_MP)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Coordination game:" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The game has 2 pure Nash equilibria:\n", "[(0, 0), (1, 1)]\n" ] } ], "source": [ "print_pure_nash_brute(g_Coo)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Rock-Paper-Scissors:" ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
"The game has no pure Nash equilibrium\n" ] } ], "source": [ "print_pure_nash_brute(g_RPS)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Battle of the Sexes:" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The game has 2 pure Nash equilibria:\n", "[(0, 0), (1, 1)]\n" ] } ], "source": [ "print_pure_nash_brute(g_BoS)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Prisoners' Dilemma:" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The game has 1 pure Nash equilibrium:\n", "[(1, 1)]\n" ] } ], "source": [ "print_pure_nash_brute(g_PD)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Cournot game:" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The game has 1 pure Nash equilibrium:\n", "[(1, 1, 1)]\n" ] } ], "source": [ "print_pure_nash_brute(g_Cou)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Sequential best response" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In some games, such as \"supermodular games\" and \"potential games\",\n", "the process of sequential best responses converges to a Nash equilibrium." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's a script to find *one* pure Nash equilibrium by sequential best response, if it converges."
] }, { "cell_type": "code", "execution_count": 41, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def sequential_best_response(g, init_actions=None, tie_breaking='smallest',\n", " verbose=True):\n", " \"\"\"\n", " Find a pure Nash equilibrium of a normal form game by sequential best\n", " response.\n", "\n", " Parameters\n", " ----------\n", " g : NormalFormGame\n", "\n", " init_actions : array_like(int), optional(default=[0, ..., 0])\n", " The initial action profile.\n", "\n", " tie_breaking : {'smallest', 'random'}, optional(default='smallest')\n", "\n", " verbose: bool, optional(default=True)\n", " If True, print the intermediate process.\n", "\n", " \"\"\"\n", " N = g.N # Number of players\n", " a = np.empty(N, dtype=int) # Action profile\n", " if init_actions is None:\n", " init_actions = [0] * N\n", " a[:] = init_actions\n", "\n", " if verbose:\n", " print('init_actions: {0}'.format(a))\n", "\n", " new_a = np.empty(N, dtype=int)\n", " max_iter = np.prod(g.nums_actions)\n", "\n", " for t in range(max_iter):\n", " new_a[:] = a\n", " for i, player in enumerate(g.players):\n", " if N == 2:\n", " a_except_i = new_a[1-i]\n", " else:\n", " a_except_i = new_a[np.arange(i+1, i+N) % N]\n", " new_a[i] = player.best_response(a_except_i,\n", " tie_breaking=tie_breaking)\n", " if verbose:\n", " print('player {0}: {1}'.format(i, new_a))\n", " if np.array_equal(new_a, a):\n", " return a\n", " else:\n", " a[:] = new_a\n", "\n", " print('No pure Nash equilibrium found')\n", " return None" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A Cournot game with linear demand is known to be a potential game,\n", "for which sequential best response converges to a Nash equilibrium.\n", "\n", "Let us try a bigger instance:" ] }, { "cell_type": "code", "execution_count": 42, "metadata": { "collapsed": true }, "outputs": [], "source": [ "a, c = 80, 20\n", "N = 3\n", "q_grid = np.linspace(0, a-c, 13) # [0, 5, 10, ..., 60]\n", "g_Cou = cournot(a, c, N, q_grid)" 
] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "init_actions: [0 0 0]\n", "player 0: [6 0 0]\n", "player 1: [6 3 0]\n", "player 2: [6 3 1]\n", "player 0: [4 3 1]\n", "player 1: [4 3 1]\n", "player 2: [4 3 2]\n", "player 0: [3 3 2]\n", "player 1: [3 3 2]\n", "player 2: [3 3 3]\n", "player 0: [3 3 3]\n", "player 1: [3 3 3]\n", "player 2: [3 3 3]\n", "Nash equilibrium indices: [3 3 3]\n", "Nash equilibrium quantities: [ 15. 15. 15.]\n" ] } ], "source": [ "a_star = sequential_best_response(g_Cou) # By default, start with (0, 0, 0)\n", "print('Nash equilibrium indices: {0}'.format(a_star))\n", "print('Nash equilibrium quantities: {0}'.format(q_grid[a_star]))" ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "init_actions: [12 12 12]\n", "player 0: [ 0 12 12]\n", "player 1: [ 0 0 12]\n", "player 2: [0 0 6]\n", "player 0: [3 0 6]\n", "player 1: [3 1 6]\n", "player 2: [3 1 4]\n", "player 0: [3 1 4]\n", "player 1: [3 2 4]\n", "player 2: [3 2 3]\n", "player 0: [3 2 3]\n", "player 1: [3 3 3]\n", "player 2: [3 3 3]\n", "player 0: [3 3 3]\n", "player 1: [3 3 3]\n", "player 2: [3 3 3]\n" ] }, { "data": { "text/plain": [ "array([3, 3, 3])" ] }, "execution_count": 44, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Start with the largest actions (12, 12, 12)\n", "sequential_best_response(g_Cou, init_actions=(12, 12, 12))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The limit action profile is indeed a Nash equilibrium:" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 45, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_Cou.is_nash(a_star)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In fact, the game has other Nash equilibria\n", "(because of our 
choice of grid points and parameter values):" ] }, { "cell_type": "code", "execution_count": 46, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The game has 7 pure Nash equilibria:\n", "[(2, 3, 4), (2, 4, 3), (3, 2, 4), (3, 3, 3), (3, 4, 2), (4, 2, 3), (4, 3, 2)]\n" ] } ], "source": [ "print_pure_nash_brute(g_Cou)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Make it bigger:" ] }, { "cell_type": "code", "execution_count": 47, "metadata": { "collapsed": true }, "outputs": [], "source": [ "N = 4\n", "q_grid = np.linspace(0, a-c, 61) # [0, 1, 2, ..., 60]\n", "g_Cou = cournot(a, c, N, q_grid)" ] }, { "cell_type": "code", "execution_count": 48, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "init_actions: [0 0 0 0]\n", "player 0: [30 0 0 0]\n", "player 1: [30 15 0 0]\n", "player 2: [30 15 7 0]\n", "player 3: [30 15 7 4]\n", "player 0: [17 15 7 4]\n", "player 1: [17 16 7 4]\n", "player 2: [17 16 11 4]\n", "player 3: [17 16 11 8]\n", "player 0: [12 16 11 8]\n", "player 1: [12 14 11 8]\n", "player 2: [12 14 13 8]\n", "player 3: [12 14 13 10]\n", "player 0: [11 14 13 10]\n", "player 1: [11 13 13 10]\n", "player 2: [11 13 13 10]\n", "player 3: [11 13 13 11]\n", "player 0: [11 13 13 11]\n", "player 1: [11 12 13 11]\n", "player 2: [11 12 13 11]\n", "player 3: [11 12 13 12]\n", "player 0: [11 12 13 12]\n", "player 1: [11 12 13 12]\n", "player 2: [11 12 12 12]\n", "player 3: [11 12 12 12]\n", "player 0: [12 12 12 12]\n", "player 1: [12 12 12 12]\n", "player 2: [12 12 12 12]\n", "player 3: [12 12 12 12]\n", "player 0: [12 12 12 12]\n", "player 1: [12 12 12 12]\n", "player 2: [12 12 12 12]\n", "player 3: [12 12 12 12]\n" ] }, { "data": { "text/plain": [ "array([12, 12, 12, 12])" ] }, "execution_count": 48, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sequential_best_response(g_Cou)" ] }, { "cell_type": "code", "execution_count": 49, "metadata": {}, "outputs": [ { "name": 
"stdout", "output_type": "stream", "text": [ "init_actions: [ 0 0 0 30]\n", "player 0: [15 0 0 30]\n", "player 1: [15 7 0 30]\n", "player 2: [15 7 4 30]\n", "player 3: [15 7 4 17]\n", "player 0: [16 7 4 17]\n", "player 1: [16 11 4 17]\n", "player 2: [16 11 8 17]\n", "player 3: [16 11 8 12]\n", "player 0: [14 11 8 12]\n", "player 1: [14 13 8 12]\n", "player 2: [14 13 10 12]\n", "player 3: [14 13 10 11]\n", "player 0: [13 13 10 11]\n", "player 1: [13 13 10 11]\n", "player 2: [13 13 11 11]\n", "player 3: [13 13 11 11]\n", "player 0: [12 13 11 11]\n", "player 1: [12 13 11 11]\n", "player 2: [12 13 12 11]\n", "player 3: [12 13 12 11]\n", "player 0: [12 13 12 11]\n", "player 1: [12 12 12 11]\n", "player 2: [12 12 12 11]\n", "player 3: [12 12 12 12]\n", "player 0: [12 12 12 12]\n", "player 1: [12 12 12 12]\n", "player 2: [12 12 12 12]\n", "player 3: [12 12 12 12]\n" ] }, { "data": { "text/plain": [ "array([12, 12, 12, 12])" ] }, "execution_count": 49, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sequential_best_response(g_Cou, init_actions=(0, 0, 0, 30))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Sequential best response does not converge in all games:" ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2-player NormalFormGame with payoff profile array:\n", "[[[ 1, -1], [-1, 1]],\n", " [[-1, 1], [ 1, -1]]]\n" ] } ], "source": [ "print(g_MP) # Matching Pennies" ] }, { "cell_type": "code", "execution_count": 51, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "init_actions: [0 0]\n", "player 0: [0 0]\n", "player 1: [0 1]\n", "player 0: [1 1]\n", "player 1: [1 0]\n", "player 0: [0 0]\n", "player 1: [0 1]\n", "player 0: [1 1]\n", "player 1: [1 0]\n", "No pure Nash equilibrium found\n" ] } ], "source": [ "sequential_best_response(g_MP)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ 
"#### Support enumeration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The routine [`support_enumeration`](http://quanteconpy.readthedocs.io/en/latest/game_theory/support_enumeration.html#quantecon.game_theory.support_enumeration.support_enumeration),\n", "which is for two-player games,\n", "visits all equal-size support pairs and\n", "checks whether each pair has a Nash equilibrium (in mixed actions)\n", "by the indifference condition.\n", "(This should thus be used only for small games.)\n", "For nondegenerate games, this routine returns all the Nash equilibria." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Matching Pennies:" ] }, { "cell_type": "code", "execution_count": 52, "metadata": { "scrolled": false }, "outputs": [ { "data": { "text/plain": [ "[(array([ 0.5, 0.5]), array([ 0.5, 0.5]))]" ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.support_enumeration(g_MP)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The output list contains a pair of mixed actions as a tuple of two NumPy arrays,\n", "which constitutes the unique Nash equilibrium of this game."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Coordination game:" ] }, { "cell_type": "code", "execution_count": 53, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2-player NormalFormGame with payoff profile array:\n", "[[[4, 4], [0, 3]],\n", " [[3, 0], [2, 2]]]\n" ] } ], "source": [ "print(g_Coo)" ] }, { "cell_type": "code", "execution_count": 54, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(array([ 1., 0.]), array([ 1., 0.])),\n", " (array([ 0., 1.]), array([ 0., 1.])),\n", " (array([ 0.66666667, 0.33333333]), array([ 0.66666667, 0.33333333]))]" ] }, "execution_count": 54, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.support_enumeration(g_Coo)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The output contains three tuples of mixed actions,\n", "where the first two correspond to the two pure action equilibria,\n", "while the last to the unique totally mixed action equilibrium." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Rock-Paper-Scissors:" ] }, { "cell_type": "code", "execution_count": 55, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2-player NormalFormGame with payoff profile array:\n", "[[[ 0, 0], [-1, 1], [ 1, -1]],\n", " [[ 1, -1], [ 0, 0], [-1, 1]],\n", " [[-1, 1], [ 1, -1], [ 0, 0]]]\n" ] } ], "source": [ "print(g_RPS)" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(array([ 0.33333333, 0.33333333, 0.33333333]),\n", " array([ 0.33333333, 0.33333333, 0.33333333]))]" ] }, "execution_count": 56, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.support_enumeration(g_RPS)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Consider the $6 \\times 6$ game by\n", "von Stengel (1997), page 12:" ] }, { "cell_type": "code", "execution_count": 57, "metadata": { "collapsed": true }, "outputs": [], "source": [ 
"player0 = gt.Player(\n", " [[ 9504, -660, 19976, -20526, 1776, -8976],\n", " [ -111771, 31680, -130944, 168124, -8514, 52764],\n", " [ 397584, -113850, 451176, -586476, 29216, -178761],\n", " [ 171204, -45936, 208626, -263076, 14124, -84436],\n", " [ 1303104, -453420, 1227336, -1718376, 72336, -461736],\n", " [ 737154, -227040, 774576, -1039236, 48081, -300036]]\n", ")\n", "\n", "player1 = gt.Player(\n", " [[ 72336, -461736, 1227336, -1718376, 1303104, -453420],\n", " [ 48081, -300036, 774576, -1039236, 737154, -227040],\n", " [ 29216, -178761, 451176, -586476, 397584, -113850],\n", " [ 14124, -84436, 208626, -263076, 171204, -45936],\n", " [ 1776, -8976, 19976, -20526, 9504, -660],\n", " [ -8514, 52764, -130944, 168124, -111771, 31680]]\n", ")\n", "\n", "g_vonStengel = gt.NormalFormGame((player0, player1))" ] }, { "cell_type": "code", "execution_count": 58, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "75" ] }, "execution_count": 58, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(gt.support_enumeration(g_vonStengel))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the $n \\times n$ game where the payoff matrices are given by the identy matrix\n", "has $2^n−1$ equilibria.\n", "It had been conjectured that this is the maximum number of equilibria of\n", "any nondegenerate $n \\times n$ game.\n", "The game above, the number of whose equilibria is $75 > 2^6 - 1 = 63$,\n", "was presented as a counter-example to this conjecture." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, let us study the **All-Pay Acution**,\n", "where, unlike standard auctions,\n", "bidders pay their bids regardless of whether or not they win.\n", "Situations modeled as all-pay auctions include\n", "job promotion, R&D, and rent seeking competitions, among others.\n", "\n", "Here we consider a version of All-Pay Auction with complete information,\n", "symmetric bidders, discrete bids, bid caps, and \"full dissipation\"\n", "(where the prize is materialized if and only if\n", "there is only one bidder who makes a highest bid)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Specifically, each of $N$ players simultaneously bids an integer from $\\{0, 1, \\ldots, c\\}$,\n", "where $c$ is the common (integer) bid cap.\n", "If player $i$'s bid is higher than all the other players',\n", "then he receives the prize, whose value is $r$, common to all players,\n", "and pays his bid $b_i$.\n", "Otherwise, he pays $b_i$ and receives nothing (zero value).\n", "In particular, if there are more than one players who make the highest bid,\n", "the prize gets *fully dissipated* and all the players receive nothing.\n", "Thus, player $i$'s payoff function is\n", "$$\n", "u_i(b_i, b_{i+1}, \\ldots, b_{i+N-1}) =\n", "\\begin{cases}\n", "r - b_i & \\text{if $b_i > b_j$ for all $j \\neq i$}, \\\\ - b_i & \\text{otherwise}.\n", "\\end{cases}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following is a script to construct a `NormalFormGame` instance\n", "for the All-Pay Auction game,\n", "where we use [Numba](https://lectures.quantecon.org/py/need_for_speed.html#numba)\n", "to speed up the loops:" ] }, { "cell_type": "code", "execution_count": 59, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from numba import jit\n", "\n", "\n", "def all_pay_auction(r, c, N, dissipation=True):\n", " \"\"\"\n", " Create a `NormalFormGame` instance for the symmetric N-player\n", " 
All-Pay Auction game with common reward `r` and common bid cap `c`.\n", "\n", " Parameters\n", " ----------\n", " r : scalar(float)\n", " Common reward value.\n", "\n", " c : scalar(int)\n", " Common bid cap.\n", "\n", " N : scalar(int)\n", " Number of players.\n", "\n", " dissipation : bool, optional(default=True)\n", " If True, the prize fully dissipates in case of a tie. If False,\n", " the prize is equally split among the highest bidders (or given\n", " to one of the highest bidders with equal probabilities).\n", "\n", " Returns\n", " -------\n", " NormalFormGame\n", " NormalFormGame instance representing the All-Pay Auction game.\n", "\n", " \"\"\"\n", " player = gt.Player(np.empty((c+1,)*N))\n", " populate_APA_payoff_array(r, dissipation, player.payoff_array)\n", " return gt.NormalFormGame((player,)*N)\n", "\n", "\n", "@jit(nopython=True)\n", "def populate_APA_payoff_array(r, dissipation, out):\n", " \"\"\"\n", " Populate the payoff array for a player in an N-player All-Pay\n", " Auction game.\n", "\n", " Parameters\n", " ----------\n", " r : scalar(float)\n", " Reward value.\n", "\n", " dissipation : bool\n", " If True, the prize fully dissipates in case of a tie. If False,\n", " the prize is equally split among the highest bidders (or given\n", " to one of the highest bidders with equal probabilities).\n", "\n", " out : ndarray(float, ndim=N)\n", " NumPy array to store the payoffs. 
Modified in place.\n", "\n", " Returns\n", " -------\n", " out : ndarray(float, ndim=N)\n", " View of `out`.\n", "\n", " \"\"\"\n", " nums_actions = out.shape\n", " N = out.ndim\n", " for bids in np.ndindex(nums_actions):\n", " out[bids] = -bids[0]\n", " num_ties = 1\n", " for j in range(1, N):\n", " if bids[j] > bids[0]:\n", " num_ties = 0\n", " break\n", " elif bids[j] == bids[0]:\n", " if dissipation:\n", " num_ties = 0\n", " break\n", " else:\n", " num_ties += 1\n", " if num_ties > 0:\n", " out[bids] += r / num_ties\n", " return out" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Consider the two-player case with the following parameter values:" ] }, { "cell_type": "code", "execution_count": 60, "metadata": { "collapsed": true }, "outputs": [], "source": [ "N = 2\n", "c = 5 # odd\n", "r = 8" ] }, { "cell_type": "code", "execution_count": 61, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2-player NormalFormGame with payoff profile array:\n", "[[[ 0., 0.], [ 0., 7.], [ 0., 6.], [ 0., 5.], [ 0., 4.], [ 0., 3.]],\n", " [[ 7., 0.], [-1., -1.], [-1., 6.], [-1., 5.], [-1., 4.], [-1., 3.]],\n", " [[ 6., 0.], [ 6., -1.], [-2., -2.], [-2., 5.], [-2., 4.], [-2., 3.]],\n", " [[ 5., 0.], [ 5., -1.], [ 5., -2.], [-3., -3.], [-3., 4.], [-3., 3.]],\n", " [[ 4., 0.], [ 4., -1.], [ 4., -2.], [ 4., -3.], [-4., -4.], [-4., 3.]],\n", " [[ 3., 0.], [ 3., -1.], [ 3., -2.], [ 3., -3.], [ 3., -4.], [-5., -5.]]]\n" ] } ], "source": [ "g_APA_odd = all_pay_auction(r, c, N)\n", "print(g_APA_odd)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Clearly, this game has no pure-action Nash equilibrium.\n", "Indeed:" ] }, { "cell_type": "code", "execution_count": 62, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[]" ] }, "execution_count": 62, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.pure_nash_brute(g_APA_odd)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ 
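For $N = 2$ the payoff array can be cross-checked without Numba by a direct vectorized construction. This is a sketch; `apa_payoff_matrix` is a hypothetical helper, not part of the library:

```python
import numpy as np

def apa_payoff_matrix(r, c):
    """Player 0's payoffs in the 2-player all-pay auction with full
    dissipation: receive r - b0 when b0 > b1, and pay b0 otherwise."""
    b0, b1 = np.indices((c + 1, c + 1))  # b0: own bid, b1: opponent's bid
    return np.where(b0 > b1, r - b0, -b0).astype(float)

U = apa_payoff_matrix(8, 5)
# U[1, 0] == 7.0 (win 8 - 1) and U[2, 2] == -2.0 (tie: full dissipation),
# matching the payoff profile array printed above
```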
"As pointed out by Dechenaux et al. (2006),\n", "there are three Nash equilibria when the bid cap `c` is odd\n", "(so that there are an even number of actions for each player):" ] }, { "cell_type": "code", "execution_count": 63, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "[(array([ 0.5 , 0. , 0.25, 0. , 0.25, 0. ]),\n", " array([ 0. , 0.25, 0. , 0.25, 0. , 0.5 ])),\n", " (array([ 0. , 0.25, 0. , 0.25, 0. , 0.5 ]),\n", " array([ 0.5 , 0. , 0.25, 0. , 0.25, 0. ])),\n", " (array([ 0.125, 0.125, 0.125, 0.125, 0.125, 0.375]),\n", " array([ 0.125, 0.125, 0.125, 0.125, 0.125, 0.375]))]" ] }, "execution_count": 63, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.support_enumeration(g_APA_odd)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition to a symmetric, totally mixed equilibrium (the third),\n", "there are two asymmetric, \"alternating\" equilibria (the first and the second)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If `c` is even, there is a unique equilibrium, which is symmetric and totally mixed.\n", "For example:" ] }, { "cell_type": "code", "execution_count": 64, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(array([ 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.25 ]),\n", " array([ 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.25 ]))]" ] }, "execution_count": 64, "metadata": {}, "output_type": "execute_result" } ], "source": [ "c = 6 # even\n", "g_APA_even = all_pay_auction(r, c, N)\n", "gt.support_enumeration(g_APA_even)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Vertex enumeration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The routine [`vertex_enumeration`](http://quanteconpy.readthedocs.io/en/latest/game_theory/vertex_enumeration.html#quantecon.game_theory.vertex_enumeration.vertex_enumeration)\n", "computes mixed-action Nash equilibria of a 2-player normal form game\n", "by enumeration and matching of vertices 
of the best response polytopes.\n", "For a nondegenerate game input, these are all the Nash equilibria.\n", "\n", "Internally,\n", "[`scipy.spatial.ConvexHull`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.ConvexHull.html)\n", "is used to compute vertex enumeration of the best response polytopes,\n", "or equivalently, facet enumeration of their polar polytopes.\n", "Then, for each vertex of the polytope for player 0,\n", "vertices of the polytope for player 1 are searched to find a completely labeled pair." ] }, { "cell_type": "code", "execution_count": 65, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(array([ 0.5, 0.5]), array([ 0.5, 0.5]))]" ] }, "execution_count": 65, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.vertex_enumeration(g_MP)" ] }, { "cell_type": "code", "execution_count": 66, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "75" ] }, "execution_count": 66, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(gt.vertex_enumeration(g_vonStengel))" ] }, { "cell_type": "code", "execution_count": 67, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(array([ 0. , 0.25, 0. , 0.25, 0. , 0.5 ]),\n", " array([ 0.5 , 0. , 0.25, 0. , 0.25, 0. ])),\n", " (array([ 0.5 , 0. , 0.25, 0. , 0.25, 0. ]),\n", " array([ 0. , 0.25, 0. , 0.25, 0. 
, 0.5 ])),\n", " (array([ 0.125, 0.125, 0.125, 0.125, 0.125, 0.375]),\n", " array([ 0.125, 0.125, 0.125, 0.125, 0.125, 0.375]))]" ] }, "execution_count": 67, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.vertex_enumeration(g_APA_odd)" ] }, { "cell_type": "code", "execution_count": 68, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(array([ 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.25 ]),\n", " array([ 0.125, 0.125, 0.125, 0.125, 0.125, 0.125, 0.25 ]))]" ] }, "execution_count": 68, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.vertex_enumeration(g_APA_even)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`support_enumeration` and `vertex_enumeration` provide the same functionality\n", "(i.e., enumeration of Nash equilibria of a two-player game),\n", "but the latter seems to run faster than the former." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Lemke-Howson" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The routine [`lemke_howson`](http://quanteconpy.readthedocs.io/en/latest/game_theory/lemke_howson.html#quantecon.game_theory.lemke_howson.lemke_howson)\n", "implements the Lemke-Howson algorithm (Lemke and Howson 1964),\n", "which returns one Nash equilibrium of a two-player normal form game.\n", "For the details of the algorithm, see, e.g., von Stengel (2007)."
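Both `vertex_enumeration` and `lemke_howson` work with the best response polytopes of the game. As a hand-rolled illustration of this geometry (brute force over constraint pairs, not the `ConvexHull`-based method the library uses), the vertices of player 1's polytope for Matching Pennies, with payoffs shifted to be positive, can be enumerated as follows:

```python
import itertools

import numpy as np

# Player 1's best-response polytope Q = {y >= 0 : A y <= 1}, where A is
# player 0's Matching Pennies payoff matrix shifted to be strictly positive
A = np.array([[1., -1.], [-1., 1.]]) + 2
G = np.vstack([-np.eye(2), A])      # rows 0-1: -y <= 0; rows 2-3: A y <= 1
h = np.array([0., 0., 1., 1.])

vertices = []
for rows in itertools.combinations(range(4), 2):
    try:
        y = np.linalg.solve(G[list(rows)], h[list(rows)])
    except np.linalg.LinAlgError:
        continue                    # parallel constraints, no intersection
    if (G @ y <= h + 1e-9).all():
        vertices.append((rows, y))  # the binding rows are the vertex's labels

# Q is a quadrilateral; its vertex with labels (2, 3) is (1/4, 1/4), and
# normalizing it to sum to one recovers the equilibrium mix (1/2, 1/2)
```

Vertex enumeration pairs such vertices across the two players' polytopes to find completely labeled pairs, while Lemke-Howson pivots between adjacent vertices until a completely labeled pair is reached.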
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Matching Pennies:" ] }, { "cell_type": "code", "execution_count": 69, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "(array([ 0.5, 0.5]), array([ 0.5, 0.5]))" ] }, "execution_count": 69, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.lemke_howson(g_MP)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Coordination game:" ] }, { "cell_type": "code", "execution_count": 70, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(array([ 1., 0.]), array([ 1., 0.]))" ] }, "execution_count": 70, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.lemke_howson(g_Coo)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The initial pivot can be specified by `init_pivot`,\n", "which should be an integer $k$ such that $0 \\leq k \\leq n_1 + n_2 - 1$ (default to `0`),\n", "where $0, \\ldots, n_1-1$ correspond to player 0's actions,\n", "while $n_1 \\ldots, n_1+n_2-1$ to player 1's actions." ] }, { "cell_type": "code", "execution_count": 71, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(array([ 0., 1.]), array([ 0., 1.]))" ] }, "execution_count": 71, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.lemke_howson(g_Coo, init_pivot=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All-Pay Auction:" ] }, { "cell_type": "code", "execution_count": 72, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(array([ 0.5 , 0. , 0.25, 0. , 0.25, 0. ]),\n", " array([ 0. , 0.25, 0. , 0.25, 0. , 0.5 ]))" ] }, "execution_count": 72, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.lemke_howson(g_APA_odd, init_pivot=0)" ] }, { "cell_type": "code", "execution_count": 73, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(array([ 0. , 0.25, 0. , 0.25, 0. , 0.5 ]),\n", " array([ 0.5 , 0. , 0.25, 0. , 0.25, 0. 
]))" ] }, "execution_count": 73, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.lemke_howson(g_APA_odd, init_pivot=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Additional information is returned if the option `full_output` is set `True`:" ] }, { "cell_type": "code", "execution_count": 74, "metadata": {}, "outputs": [ { "data": { "text/plain": [ " NE: (array([ 0.5 , 0. , 0.25, 0. , 0.25, 0. ]), array([ 0. , 0.25, 0. , 0.25, 0. , 0.5 ]))\n", " converged: True\n", " init: 0\n", " max_iter: 1000000\n", " num_iter: 6" ] }, "execution_count": 74, "metadata": {}, "output_type": "execute_result" } ], "source": [ "NE, res = gt.lemke_howson(g_APA_odd, full_output=True)\n", "res" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`lemke_howson` runs fast,\n", "with a reasonable time amount for games with up to several hundreds actions.\n", "(In fact, this is the only routine among the Nash equilibrium computation routines in the `game_theory` submodule\n", "that scales to large-size games.)\n", "For example:" ] }, { "cell_type": "code", "execution_count": 75, "metadata": { "collapsed": true }, "outputs": [], "source": [ "N = 2\n", "c = 200 # 201 actions\n", "r = 500\n", "g_APA200 = all_pay_auction(r, c, N)" ] }, { "cell_type": "code", "execution_count": 76, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(array([ 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 
0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.6 ]),\n", " array([ 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 
0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002,\n", " 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.002, 0.6 ]))" ] }, "execution_count": 76, "metadata": {}, "output_type": "execute_result" } ], "source": [ "gt.lemke_howson(g_APA200)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "#### McLennan-Tourky" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The routine [`mclennan_tourky`](http://quanteconpy.readthedocs.io/en/latest/game_theory/mclennan_tourky.html#quantecon.game_theory.mclennan_tourky.mclennan_tourky)\n", "computes one approximate Nash equilibrium of an $N$-player normal form game\n", "by the fixed-point algorithm of McLennan and Tourky (2006) applied to the best response correspondence." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Consider the symmetric All-Pay Auction with full dissipation as above, but this time with three players:" ] }, { "cell_type": "code", "execution_count": 77, "metadata": { "collapsed": true }, "outputs": [], "source": [ "N = 3\n", "r = 16\n", "c = 5\n", "g_APA3 = all_pay_auction(r, c, N)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run `mclennan_tourky`:" ] }, { "cell_type": "code", "execution_count": 78, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(array([ 0.24986561, 0.10338418, 0.07959071, 0.06705212, 0.05909946,\n", " 0.44100792]),\n", " array([ 0.24986561, 0.10338418, 0.07959071, 0.06705212, 0.05909946,\n", " 0.44100792]),\n", " array([ 0.24986561, 0.10338418, 0.07959071, 0.06705212, 0.05909946,\n", " 0.44100792]))" ] }, "execution_count": 78, 
"metadata": {}, "output_type": "execute_result" } ], "source": [ "NE = gt.mclennan_tourky(g_APA3)\n", "NE" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This output is an $\\varepsilon$-Nash equilibrium of the game,\n", "which is a profile of mixed actions $(x^*_0, \\ldots, x^*_{N-1})$ such that\n", "for all $i$, $u_i(x^*_i, x^*_{-i}) \\geq u_i(x_i, x^*_{-i}) - \\varepsilon$ for all $x_i$,\n", "where the value of $\\varepsilon$ is specified by the option `epsilon` (default to `1e-3`)." ] }, { "cell_type": "code", "execution_count": 79, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 79, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_APA3.is_nash(NE, tol=1e-3)" ] }, { "cell_type": "code", "execution_count": 80, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(array([ 0.2500157 , 0.1035519 , 0.07944073, 0.06699354, 0.05902238,\n", " 0.44097575]),\n", " array([ 0.2500157 , 0.1035519 , 0.07944073, 0.06699354, 0.05902238,\n", " 0.44097575]),\n", " array([ 0.2500157 , 0.1035519 , 0.07944073, 0.06699354, 0.05902238,\n", " 0.44097575]))" ] }, "execution_count": 80, "metadata": {}, "output_type": "execute_result" } ], "source": [ "epsilon = 1e-4\n", "NE = gt.mclennan_tourky(g_APA3, epsilon=epsilon)\n", "NE" ] }, { "cell_type": "code", "execution_count": 81, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 81, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_APA3.is_nash(NE, tol=epsilon)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Additional information is returned by setting the `full_output` option to `True`:" ] }, { "cell_type": "code", "execution_count": 82, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ " NE: (array([ 0.24986561, 0.10338418, 0.07959071, 0.06705212, 0.05909946,\n", " 0.44100792]), array([ 0.24986561, 0.10338418, 0.07959071, 0.06705212, 0.05909946,\n", " 0.44100792]), 
array([ 0.24986561, 0.10338418, 0.07959071, 0.06705212, 0.05909946,\n", " 0.44100792]))\n", " converged: True\n", " epsilon: 0.001\n", " init: (0, 0, 0)\n", " max_iter: 200\n", " num_iter: 127" ] }, "execution_count": 82, "metadata": {}, "output_type": "execute_result" } ], "source": [ "NE, res = gt.mclennan_tourky(g_APA3, full_output=True)\n", "res" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For this game, `mclennan_tourky` returned a symmetric, totally mixed action profile\n", "(cf. Rapoport and Amaldoss 2004)\n", "with the default initial condition `(0, 0, 0)` (profile of pure actions).\n", "\n", "Let's try a different initial condition:" ] }, { "cell_type": "code", "execution_count": 83, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(array([ 0.86582198, 0. , 0.06955033, 0. , 0.06462769, 0. ]),\n", " array([ 0.86582198, 0. , 0.06955033, 0. , 0.06462769, 0. ]),\n", " array([ 0. , 0.14389389, 0. , 0.12286532, 0. ,\n", " 0.73324079]))" ] }, "execution_count": 83, "metadata": {}, "output_type": "execute_result" } ], "source": [ "init = (\n", " [1/2, 0, 0, 1/2, 0, 0], [0, 1/2, 0, 0, 1/2, 0], [0, 0, 0, 0, 0, 1]\n", ") # profile of mixed actions\n", "gt.mclennan_tourky(g_APA3, init=init)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We obtained an asymmetric \"alternating\" mixed action profile.\n", "While this is just an approximate Nash equilibrium,\n", "it suggests that there is an (exact) Nash equilibrium of the form\n", "$((p_0, 0, p_2, 0, p_4, 0), (p_0, 0, p_2, 0, p_4, 0), (0, q_1, 0, q_3, 0, q_5))$.\n", "In fact, a simple calculation shows that there is one such that\n", "$$\n", "p_0 = \left(\frac{r-4}{r}\right)^{\frac{1}{2}},\n", "p_0 + p_2 = \left(\frac{r-2}{r}\right)^{\frac{1}{2}},\n", "p_4 = 1 - (p_0 + p_2),\n", "$$\n", "and\n", "$$\n", "q_1 = \frac{2}{r p_0},\n", "q_1 + q_3 = \frac{4}{r (p_0+p_2)},\n", "q_5 = 1 - (q_1 + q_3).\n", "$$\n", "\n", "To verify:" ] }, { "cell_type": "code", "execution_count": 
84, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "([0.8660254037844386, 0, 0.06938894290904674, 0, 0.06458565330651467, 0],\n", " [0.8660254037844386, 0, 0.06938894290904674, 0, 0.06458565330651467, 0],\n", " [0, 0.14433756729740646, 0, 0.12292367461501794, 0, 0.7327387580875756])" ] }, "execution_count": 84, "metadata": {}, "output_type": "execute_result" } ], "source": [ "p0 = ((r-4)/r)**(1/2)\n", "p02 = ((r-2)/r)**(1/2)\n", "p2 = p02 - p0\n", "p4 = 1 - p02\n", "q1 = (2/r)/p0\n", "q13 = (4/r)/(p02)\n", "q3 = q13 - q1\n", "q5 = 1 - q13\n", "a = ([p0, 0, p2, 0, p4, 0], [p0, 0, p2, 0, p4, 0], [0, q1, 0, q3, 0, q5])\n", "a" ] }, { "cell_type": "code", "execution_count": 85, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 85, "metadata": {}, "output_type": "execute_result" } ], "source": [ "g_APA3.is_nash(a)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## References\n", "\n", "* E. Dechenaux, D. Kovenock, and V. Lugovskyy (2006),\n", " \"Caps on bidding in all-pay auctions:\n", " Comments on the experiments of A. Rapoport and W. Amaldoss,\"\n", " Journal of Economic Behavior and Organization 61, 276-283.\n", "\n", "* C. E. Lemke and J. T. Howson (1964),\n", " \"Equilibrium Points of Bimatrix Games,\"\n", " Journal of the Society for Industrial and Applied Mathematics 12, 413-423.\n", "\n", "* A. McLennan and R. Tourky (2006),\n", " \"[From Imitation Games to Kakutani](http://cupid.economics.uq.edu.au/mclennan/Papers/kakutani60.pdf).\"\n", "\n", "* A. Rapoport and W. Amaldoss (2004),\n", " \"Mixed-strategy play in single-stage first-price all-pay auctions with symmetric players,\"\n", " Journal of Economic Behavior and Organization 54, 585-607.\n", "\n", "* B. von Stengel (1997),\n", " \"[New Lower Bounds for the Number of Equilibria in Bimatrix Games](http://www.maths.lse.ac.uk/personal/stengel/TEXTE/264.pdf).\"\n", "\n", "* B. 
von Stengel (2007),\n", " \"[Equilibrium Computation for Two-Player Games in Strategic and Extensive Form](http://www.maths.lse.ac.uk/personal/stengel/TEXTE/agt-stengel.pdf),\"\n", " Chapter 3, N. Nisan, T. Roughgarden, E. Tardos, and V. Vazirani eds., Algorithmic Game Theory." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 1 }