{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "#
Block 9: estimation of matching models
\n", "###
Alfred Galichon (NYU & Sciences Po)
\n", "##
'math+econ+code' masterclass on optimal transport and economic applications
\n", "
© 2018-2021 by Alfred Galichon. Past and present support from NSF grant DMS-1716489, ERC grant CoG-866274, and contributions by Jules Baudet, Pauline Corblet, Gregory Dannay, and James Nesbit are acknowledged.
\n", "\n", "####
With Python code examples
\n", "\n", "**If you reuse material from this masterclass, please cite as:**
\n", "Alfred Galichon, 'math+econ+code' masterclass on optimal transport and economic applications, January 2021. https://github.com/math-econ-code/mec_optim" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Learning objectives\n", "\n", "* Matching with unobserved heterogeneities\n", "\n", "* Estimation of matching models\n", "\n", "* Generalized linear models" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## References\n", "\n", "**[B]** Becker (1973). 'A Theory of Marriage: Part 1.' *Journal of Political Economy*.\n", "\n", "**[CS]** Choo and Siow (2006). 'Who Marries Whom and Why'. *Journal of Political Economy*.\n", "\n", "**[MN]** McCullagh and Nelder (1989). *Generalized Linear Models, Second Edition*. Chapman and Hall/CRC. \n", "\n", "\n", "**[COQ]** Chiappori, Oreffice and Quintana-Domeque (2012). 'Fatter Attraction: Anthropometric and Socioeconomic Matching on the Marriage Market'. *Journal of Political Economy*.\n", "\n", "\n", "**[CSW]** Chiappori, Salanié, and Weiss (2017). 'Partner Choice and the Marital College Premium'. * American Economic Review*.\n", "\n", "**[DG]** Dupuy and Galichon (2014). 'Personality traits and the marriage market'. *Journal of Political Economy*.\n", "\n", "**[GS]** Galichon and Salanié (2020). 'Cupid's Invisible Hand: Social Surplus and Identification in Matching Models'. Preprint (first version 2011).\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Motivation: models of matching since Gary Becker\n", "\n", "* In the footsteps of Becker, empirical studies on the marriage market had long been focused on one-dimensional models, which assumes that a single index is enough to capture the interactions on the marriage market, and positive assortative matching (PAM), which predicts that the matching equilibrium will tend to match the agents with higher indices with each other.\n", "\n", "* However, it is desirable to move beyond PAM:\n", " * PAM is always loosely true, never precisely\n", " * there are often many observed characteristics, and it is not always the case that the sorting can be captured by a single-dimensional model\n", " * PAM is a theoretical prediction stemming from assumptions of supermodularity of the surplus function which do not necessarly hold\n", " * optimal transport provide tools to study multidimensional models\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* However, any model of matching based on (unregularized) optimal transport will not be exploitable because it will generate far too strong predictions, namely that some matchings will never hold. This is rather counterfactual: in the data, one observes virtually any combination of type.\n", "\n", "* Hence, need to regularize the matching model, and we shall do so by introducing unobserved heterogeneity. The model so obtained will be exploitable for estimation and identification purposes. The first such model (with transfers) is the model by [CS]. We shall see a generalization of this model by [GS] (2015).\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading our libraries\n", "\n", "We start with loading the libraries we will need. They are rather standard." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np\n", "from scipy import optimize\n", "# !python -m pip install -i https://pypi.gurobi.com gurobipy ## only if Gurobi not here\n", "import scipy.sparse as spr\n", "import gurobipy as grb\n", "\n", "from sklearn import linear_model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A look at our data\n", "\n", "* Our data are Choo and Siow's original data. Choo and Siow wanted to study the impact of the legalization of abortion by the Roe vs. Wade decision by the supreme court on the 'value of marriage'. Roe vs. Wade decreased the role of marriage in covering out-of-the-wedlocks pregnancies ('shotgun weddings').\n", "\n", "* The decision did, however, not make a change uniformly in the United\n", "States as a number of states had already legalized abortion (reform states).\n", "Choo and Siow thus offer a diffs-in-diffs approach in order to compute the\n", "change in the value of marriage.\n", "\n", "* Choo and Siow's data are thus made of the marriages between men and women in reformed states (R) vs nonreformed states (NR), in 1972 and in 1982. One should expect to see a higher drop in marriage value in NR states." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "thepath = 'https://raw.githubusercontent.com/math-econ-code/mec_optim_2021-01/master/data_mec_optim/marriage-ChooSiow/'\n", "\n", "n_singles = pd.read_csv(thepath+'n_singles.txt', sep='\\t', header = None)\n", "marr = pd.read_csv(thepath+'marr.txt', sep='\\t', header = None)\n", "navail = pd.read_csv(thepath+'n_avail.txt', sep='\\t', header = None)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The data used by Choo and Siow is census data on marriages between age categories, from age 16 (row/column 0) to age 75 (row/age 59). It is thus 60x60 tables:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
0123456789...50515253545556575859
02270410954393215506724141901147864...0000000000
1402663898016368571421161101691427260127...0000000000
239219497534031518001540122911429856611338...0000000000
32950044788472123942916730627032111910762438...0000000000
41895232860401034348836265153186076287215521246...0000000000
\n", "

5 rows × 60 columns

\n", "
" ], "text/plain": [ " 0 1 2 3 4 5 6 7 8 9 ... 50 \\\n", "0 22704 10954 3932 1550 672 414 190 114 78 64 ... 0 \n", "1 40266 38980 16368 5714 2116 1101 691 427 260 127 ... 0 \n", "2 39219 49753 40315 18001 5401 2291 1429 856 611 338 ... 0 \n", "3 29500 44788 47212 39429 16730 6270 3211 1910 762 438 ... 0 \n", "4 18952 32860 40103 43488 36265 15318 6076 2872 1552 1246 ... 0 \n", "\n", " 51 52 53 54 55 56 57 58 59 \n", "0 0 0 0 0 0 0 0 0 0 \n", "1 0 0 0 0 0 0 0 0 0 \n", "2 0 0 0 0 0 0 0 0 0 \n", "3 0 0 0 0 0 0 0 0 0 \n", "4 0 0 0 0 0 0 0 0 0 \n", "\n", "[5 rows x 60 columns]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "marr.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The data also includes the number of single individuals per age category:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
0123456789...50515253545556575859
01010132907226772448597919454792341370315797253618171614152228...68383691776135870752691126090960749653026292261117
1790793671332602900492020404882313755255414195418141283128787...197456201183191519218031219157199926200052202146210315202775
\n", "

2 rows × 60 columns

\n", "
" ], "text/plain": [ " 0 1 2 3 4 5 6 7 8 \\\n", "0 1010132 907226 772448 597919 454792 341370 315797 253618 171614 \n", "1 790793 671332 602900 492020 404882 313755 255414 195418 141283 \n", "\n", " 9 ... 50 51 52 53 54 55 56 \\\n", "0 152228 ... 68383 69177 61358 70752 69112 60909 60749 \n", "1 128787 ... 197456 201183 191519 218031 219157 199926 200052 \n", "\n", " 57 58 59 \n", "0 65302 62922 61117 \n", "1 202146 210315 202775 \n", "\n", "[2 rows x 60 columns]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "n_singles.transpose().head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following loads the data and rescales them appropriately:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Total number of individuals in the sample = 14885023.\n" ] } ], "source": [ "def prepareChooSiow_data(n_singles,marr,nbCateg = 25):\n", " μhat_x0 = np.array(n_singles[0].iloc[0:nbCateg])\n", " μhat_0y = np.array(n_singles[1].iloc[0:nbCateg])\n", " μhat_xy = np.array(marr.iloc[0:nbCateg:,0:nbCateg])\n", " Nhat = 2 * μhat_xy.sum() + μhat_x0.sum() + μhat_0y.sum() \n", " μhat_a = np.concatenate([μhat_xy.flatten(),μhat_x0,μhat_0y]) / Nhat # rescale the data so that the total number of individual is one\n", "\n", " return μhat_a, Nhat,nbCateg,nbCateg\n", "\n", "cs_μhat_a, cs_Nhat,cs_nbx,cs_nby = prepareChooSiow_data(n_singles,marr)\n", "\n", "print('Total number of individuals in the sample = '+str(cs_Nhat)+'.')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Building the model\n", "\n", "The analysis here follows [GS], who build on the logit model by [CS]. \n", "\n", "* Consider a heterosexual marriage matching market. The set of types (observable characteristics) is $\\mathcal{X}$ for men, and $\\mathcal{Y}$ for women. There are $n_{x}$ men of type $x$, and $m_{y}$ women of type $y$.\n", "\n", "* Assume that if a man $i\\in\\mathcal{I}$ of type $x_{i}$ and a woman $j\\in\\mathcal{J}$ of type $y_{j}$ match, they get respective utilities \\begin{align*} & \\alpha_{x_{i}y_{j}}+w_{ij}+\\varepsilon_{iy_{j}}\\\\ & \\gamma_{x_{i}y_{j}}-w_{ij}+\\eta_{x_{i}j} \\end{align*} where $w_{ij}$ is the transfer from $i$ to $j$. If they remain single $i$ and $j$ get respectively $\\varepsilon_{i0}$ and $\\eta_{0j}$.\n", "\n", "* The random utility vectors $\\left( \\varepsilon_{y}\\right) $ and $\\left( \\eta_{x}\\right) $ are drawn from probability distributions $\\mathbf{P}_{x}$ and $\\mathbf{Q}_{y}$, respectively. In the sequel we shall work with a finite number of agents of each type, and then we'll investigate the limit of these results.\n", "\n", "* The fact that the preferences for the other side of the market terms $\\varepsilon_{iy_{j}}$ and $\\eta_{x_{i}}$ do not vary within observed types is a very important implicit assumption called **separability**. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Optimal matching\n", "\n", "\n", "The matching surplus between $i$ and $j$ is therefore $$\\tilde{\\Phi}_{ij}=\\Phi_{x_{i}y_{j}}+\\varepsilon_{iy_{j}}+\\eta_{x_{i}j}$$ where $\\Phi_{xy}=\\alpha_{xy}+\\gamma_{xy}$. 
The value of optimal matching (i.e. of the primal problem $\\max_{\\mu\\geq0}\\sum_{ij}\\mu_{ij}\\tilde{\\Phi}_{ij}+\\sum_{i}\\mu_{i0}\\varepsilon_{i0}+\\sum_{j}\\mu_{0j}\\eta_{0j}$ subject to the margin constraints) is thus, under its dual form, \\begin{align*}\\min_{u_{i},v_{j}} & \\sum_{i\\in\\mathcal{I}}u_{i}+\\sum_{j\\in\\mathcal{J}} v_{j}\\\\ s.t.~ & u_{i}+v_{j}\\geq\\Phi_{x_{i}y_{j}}+\\varepsilon_{iy_{j}}+\\eta_{x_{i}j}\\\\ & u_{i}\\geq\\varepsilon_{i0}\\\\ & v_{j}\\geq\\eta_{0j} \\end{align*}\n", "\n", "Written like this, the lp has $\\left\\vert \\mathcal{I}\\right\\vert +\\left\\vert \\mathcal{J}\\right\\vert $ variables and $\\left\\vert \\mathcal{I} \\right\\vert \\times\\left\\vert \\mathcal{J}\\right\\vert +\\left\\vert \\mathcal{I} \\right\\vert +\\left\\vert \\mathcal{J}\\right\\vert $ constraints. Assuming that there are $K$ individuals per type for each type, this is $K\\left( \\left\\vert \\mathcal{X}\\right\\vert +\\left\\vert \\mathcal{Y}\\right\\vert \\right) $ variables and $K^{2}\\left( \\left\\vert \\mathcal{X}\\right\\vert \\times\\left\\vert \\mathcal{Y}\\right\\vert \\right) +K\\left( \\left\\vert \\mathcal{X}\\right\\vert +\\left\\vert \\mathcal{Y}\\right\\vert \\right) $ constraints.\n", "\n", "The number of constraints is **quadratic** with respect to $K$. Fortunately, a little thinking about the implications of separability will help us reduce this complexity.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A property of equilibrium\n", "\n", "We have:\n", "\n", "---\n", "\n", "**Lemma**. Consider the set $\\mathcal{I}_{xy}$ of men of type $x$ matched to women of type $y$ at equilibrium. If $\\mathcal{I}_{xy}$ is nonempty, then $u_{i}-\\varepsilon_{iy}$ is a constant across $\\mathcal{I}_{xy}$.\n", "\n", "---\n", "\n", "**Proof**. For $i\\in\\mathcal{I}$ such that $x_{i}=x$,\n", "\\begin{align*}\n", "u_{i} & =\\max_{j\\in\\mathcal{J}}\\left\\{ \\tilde{\\Phi}_{ij}-v_{j},\\varepsilon_{i0}\\right\\} \\\\\n", "& =\\max_{y\\in\\mathcal{Y}}\\left\\{ U_{xy}+\\varepsilon_{iy},\\varepsilon_{i0}\\right\\}\n", "\\end{align*}\n", "where $U_{xy}=\\max_{j:y_{j}=y}\\left\\{ \\Phi_{xy}+\\eta_{xj}-v_{j}\\right\\} $, thus $u_{i}\\geq U_{xy}+\\varepsilon_{iy}$ with equality on $\\mathcal{I}_{xy}$. With similar notations, $v_{j}\\geq V_{xy}+\\eta_{xj}$ with equality on $\\mathcal{J}_{xy}$. 
As a result, if $\\mathcal{I}_{xy}$ is nonempty, then every $i\\in\\mathcal{I}_{xy}$ is matched with some $j\\in\\mathcal{J}_{xy}$, and complementary slackness ($u_{i}+v_{j}=\\Phi_{xy}+\\varepsilon_{iy}+\\eta_{xj}$ for matched pairs) forces $U_{xy}+V_{xy}=\\Phi_{xy}$ and $\\forall i\\in\\mathcal{I}_{xy},$ $u_{i}=U_{xy}+\\varepsilon_{iy}$.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A simplification\n", "\n", "In the sequel, we shall see that *adding* an auxiliary variable to the previous lp will lead to *decreasing* the computational complexity of the problem.\n", "\n", "Observe that the first set of constraints is reexpressed by saying that, for every $x\\in\\mathcal{X}$, $y\\in\\mathcal{Y}$,\n", "$$\n", "\\min_{i:x_{i}=x}\\left\\{ u_{i}-\\varepsilon_{iy}\\right\\} +\\min_{j:y_{j}=y}\\left\\{ v_{j}-\\eta_{xj}\\right\\} \\geq\\Phi_{xy}.\n", "$$\n", "\n", "\n", "Hence, letting $U_{xy}=\\min_{i:x_{i}=x}\\left\\{ u_{i}-\\varepsilon_{iy}\\right\\} $ and $V_{xy}=\\min_{j:y_{j}=y}\\left\\{ v_{j}-\\eta_{xj}\\right\\} $, a solution of the previous lp should satisfy\n", "$$\n", "u_{i}=\\max_{y\\in\\mathcal{Y}}\\left\\{ U_{x_{i}y}+\\varepsilon_{iy},\\varepsilon_{i0}\\right\\} \\text{ and }v_{j}=\\max_{x\\in\\mathcal{X}}\\left\\{ V_{xy_{j}}+\\eta_{xj},\\eta_{0j}\\right\\} .\n", "$$\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The problem rewrites as\n", "\\begin{align*}\n", "\\min_{u_{i},v_{j},U_{xy},V_{xy}} & \\sum_{i\\in\\mathcal{I}}u_{i}+\\sum_{j\\in\\mathcal{J}}v_{j}\\\\\n", "s.t.~ & U_{xy}+V_{xy}\\geq\\Phi_{xy}~\\left[ \\mu_{xy}\\geq0\\right] \\\\\n", "& u_{i}\\geq U_{x_{i}y}+\\varepsilon_{iy}~\\left[ \\mu_{iy}\\right] \\\\\n", "& v_{j}\\geq V_{xy_{j}}+\\eta_{xj}~\\left[ \\mu_{xj}\\right] \\\\\n", "& u_{i}\\geq\\varepsilon_{i0}~\\left[ \\mu_{i0}\\right] \\\\\n", "& v_{j}\\geq\\eta_{0j}~\\left[ \\mu_{0j}\\right]\n", "\\end{align*}\n", "\n", "This problem has $K\\left( \\left\\vert \\mathcal{X}\\right\\vert +\\left\\vert \\mathcal{Y}\\right\\vert \\right) +\\left\\vert \\mathcal{X}\\right\\vert \\times\\left\\vert \\mathcal{Y}\\right\\vert $ variables and $\\left( \\left\\vert \\mathcal{X}\\right\\vert \\times\\left\\vert \\mathcal{Y}\\right\\vert \\right) +K\\left( 2\\left\\vert \\mathcal{X}\\right\\vert \\times\\left\\vert \\mathcal{Y}\\right\\vert +\\left\\vert \\mathcal{X}\\right\\vert +\\left\\vert \\mathcal{Y}\\right\\vert \\right) $ constraints.\n", "\n", "The number of constraints is now **linear** with respect to $K$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Consequences\n", "\n", "**1. Lagrange multipliers:**\n", "* The Lagrange multiplier $\\mu_{xy}$ is interpreted as the number of matchings between types $x$ and $y$.\n", "\n", "* The Lagrange multiplier $\\mu_{iy}$ ($y\\in\\mathcal{Y}_{0}$) is interpreted as a 0-1 indicator that man $i$ chooses a type $y$.\n", "\n", "* The Lagrange multiplier $\\mu_{xj}$ ($x\\in\\mathcal{X}_{0}$) is interpreted as a 0-1 indicator that woman $j$ chooses a type $x$.\n", "\n", "**2. 
Utilities:**\n", "* Man $i$ solves a discrete choice problem $u_{i}=\\max_{y\\in\\mathcal{Y}}\\left\\{ U_{xy}+\\varepsilon_{iy},\\varepsilon_{i0}\\right\\} $.\n", "\n", "* Woman $j$ solves a discrete choice problem $v_{j}=\\max_{x\\in\\mathcal{X}}\\left\\{ V_{xy}+\\eta_{xj},\\eta_{0j}\\right\\} .$\n", "\n", "* $U_{xy}$ and $V_{xy}$ are related by $U_{xy}+V_{xy}\\geq\\Phi_{xy}$ with equality if $\\mu_{xy}>0$.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Large market limit\n", "\n", "Now look at the limit of the previous markets when the number of market participants gets large, holding fixed the frequency of each type.\n", "\n", "In the large population limit, $n_{x}$ and $m_{y}$ are now interpreted as the mass distribution of respective types $x$ and $y$.\n", "\n", "We shall from now on assume that $\\mathbf{P}_{x}$ and $\\mathbf{Q}_{y}$, the distributions of the random utility vectors $\\left( \\varepsilon_{y}\\right) $ and $\\left( \\eta_{x}\\right) $, have a density with full support. This will ensure that the Emax operators associated with the choice problems of the men and the women respectively \\begin{align*} & G_x(U_{x.}) = \\mathbb{E}_{\\mathbf{P}_{x}} \\left[\\max_{y\\in\\mathcal{Y}}\\left\\{ U_{xy}+\\varepsilon_{iy},\\varepsilon_{i0}\\right\\} \\right]\\text{, and }\\\\ & H_y(V_{.y}) = \\mathbb{E}_{\\mathbf{Q}_{y}} \\left[\\max_{x\\in\\mathcal{X}}\\left\\{ V_{xy}+\\eta_{xj},\\eta_{0j}\\right\\} \\right],\\end{align*}as well as the corresponding entropies of choice $G_x^{\\ast}$ and $H_y^{\\ast}$, are continuously differentiable." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Under these assumptions, the problem becomes\n", "\\begin{align*}\n", "\\min_{U,V} ~& G\\left( U\\right) +H\\left( V\\right) \\\\\n", "s.t.~ & U_{xy}+V_{xy}\\geq\\Phi_{xy}~\\left[ \\mu_{xy}\\right]\n", "\\end{align*}\n", "where\n", "\\begin{align*}\n", "G\\left( U\\right) & =\\sum_{x\\in\\mathcal{X}}n_{x}\\mathbb{E}_{\\mathbf{P}_{x}}\\left[ \\max_{y\\in\\mathcal{Y}}\\left\\{ U_{xy}+\\varepsilon_{iy},\\varepsilon_{i0}\\right\\} \\right] \\\\\n", "H\\left( V\\right) & =\\sum_{y\\in\\mathcal{Y}}m_{y}\\mathbb{E}_{\\mathbf{Q}_{y}}\\left[ \\max_{x\\in\\mathcal{X}}\\left\\{ V_{xy}+\\eta_{xj},\\eta_{0j}\\right\\} \\right]\n", "\\end{align*}\n", "\n", "\n", "By first order conditions,\n", "$$\n", "\\frac{\\partial G\\left( U\\right) }{\\partial U_{xy}}=\\mu_{xy}=\\frac{\\partial H\\left( V\\right) }{\\partial V_{xy}},\n", "$$\n", "and $\\mu_{xy}>0$ for every $x\\in\\mathcal{X}$ and $y\\in\\mathcal{Y}$ (the full-support assumption ensures that every pair of types is matched in positive proportion).\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Social planner's problem\n", "\n", "The primal problem corresponding to the problem above is\n", "$$\n", "\\max_{\\mu_{xy}\\geq0}\\sum_{\\substack{x\\in\\mathcal{X}\\\\y\\in\\mathcal{Y}}}\\mu_{xy}\\Phi_{xy}-\\mathcal{E}\\left( \\mu\\right)\n", "$$\n", "where\n", "$$\n", "\\mathcal{E}\\left( \\mu\\right) =G^{\\ast}\\left( \\mu\\right) +H^{\\ast}\\left( \\mu\\right)\n", "$$\n", "\n", "\n", "Recall $G^{\\ast}\\left( \\mu\\right) =\\max_{U}\\left\\{ \\sum_{xy}\\mu_{xy}U_{xy}-G\\left( U\\right) \\right\\} $ is the Legendre transform of $G$, and similarly for $H^{\\ast}$."
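] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick numerical sanity check (a sketch for intuition, not part of the estimation procedure of [GS]), the next cell illustrates this Legendre-transform relation in the simplest one-type Gumbel (logit) case, which is the specification underlying the Choo and Siow model studied below: it recovers $G^{\\ast}$ at $\\mu_{0}=\\nabla G\\left( U_{0}\\right) $ by direct maximization, and compares it with the value $\\mu_{0}^{\\top }U_{0}-G\\left( U_{0}\\right) $ predicted by the Fenchel equality." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# sketch: a single type x of mass nx facing three alternatives plus singlehood; with iid Gumbel taste shocks,\n", "# the Emax is G(U) = nx * log(1 + sum_y exp(U_y)), up to an additive constant which is immaterial here\n", "nx, U0 = 1.0, np.array([0.5, -0.3, 1.2])\n", "G = lambda U: nx * np.log(1 + np.exp(U).sum())\n", "mu0 = nx * np.exp(U0) / (1 + np.exp(U0).sum()) # = grad G(U0), the logit choice probabilities\n", "Gstar = - optimize.minimize(lambda U: G(U) - mu0 @ U, x0 = np.zeros(3)).fun # G*(mu0) by direct maximization\n", "print(Gstar, mu0 @ U0 - G(U0)) # the two values should coincide up to solver tolerance"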
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Constructing the surplus:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "def build_surplus(nbx,nby):\n", " xs, ys = np.repeat(range(1,nbx+1),nby).reshape(nbx,nby)/nbx, np.repeat(range(1,nby+1),nbx).reshape(nbx,nby).T/nby\n", " phi1_xy = -((xs-ys)**2).flatten()\n", " phimat = np.vstack([phi1_xy,\n", " -((xs-ys)**2 * ((xs+ys)/2)**2).flatten(),\n", " -( (xs-ys)**2 * ((xs+ys-2)/2)**2).flatten(),\n", " -( (xs-ys)**2 *(xs+ys-1)**2).flatten()\n", " ]).T\n", " phimat_mean = phimat.mean(axis = 0 , keepdims=True)\n", " phimat_stdev = phimat.std(axis = 0 , keepdims=True)\n", " φ_xy_k = np.hstack([np.ones((nbx*nby,1)) ,(phimat - phimat_mean)/phimat_stdev] )\n", " _,nbk = φ_xy_k.shape\n", " return np.vstack( [φ_xy_k ,np.zeros( (nbx,nbk) ),np.zeros( (nby,nbk) )])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Denoting $a$ a generic element of \n", "\n", "$\\mathcal{A}=\\mathcal{X}\\times \\mathcal{Y}\\cup \\mathcal{X}\\times \\left\\{\n", "0\\right\\} \\cup \\left\\{ 0\\right\\} \\times \\mathcal{Y}$ the set of matches, and
\n", "$\\mathcal{K}=\\left\\{ 1,...,K\\right\\} $ the index set of the parameter,
\n", "we may view \n", "* $\\Phi =\\left( \\phi _{a}^{k}\\right) $ as a $\\mathcal{A}\\times \n", "\\mathcal{K}$ matrix, and\n", "* $\\hat{\\mu}$ as an element of $\\mathbb{R}^\\mathcal{A}$,\n", "\n", "and we define $d_{xy}=2$, $d_{x0}=d_{0y}=1$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's create a class `TUlogit` that handles this:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "class TUlogit:\n", " def __init__(self,nbx,nby,φ_a_k , μhat_a ):\n", " self.nba, self.nbk = φ_a_k.shape\n", " self.nbx, self.nby = nbx, nby\n", " self.φ_a_k, self.μhat_a = φ_a_k, μhat_a\n", " self.d_a = np.append(2*np.ones(nbx*nby),np.ones(nbx+nby))\n", " μ_x_y = self.μhat_a[0:(nbx*nby)].reshape((nbx,nby))\n", " self.n_x = μ_x_y.sum(axis = 1) + self.μhat_a[(nbx*nby):(nbx*nby+nbx)]\n", " self.m_y = μ_x_y.sum(axis = 0) + self.μhat_a[-nby:]\n", "\n", " \n", "choo_siow_mkt = TUlogit(cs_nbx,cs_nby,build_surplus(cs_nbx,cs_nby),cs_μhat_a) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Identification of the matching surplus\n", "\n", "\n", "\n", "---\n", "\n", "**Theorem.** By first order conditions, we get the identifcation formula of $\\Phi$:\n", "$$\n", "\\Phi_{xy}=\\frac{\\partial G^{\\ast}\\left( \\mu\\right) }{\\partial\\mu_{xy}}\n", "+\\frac{\\partial H^{\\ast}\\left( \\mu\\right) }{\\partial\\mu_{xy}}.\n", "$$\n", "\n", "---\n", "\n", "This means that the surplus function is identified *nonparametrically* given the matching patterns $\\mu$ and assuming a fixed distribution of unobserved heterogeneity.\n", "\n", "Hence only the joint surplus $\\Phi\n", "_{xy}=\\alpha_{xy}+\\gamma_{xy}$ is identified. However, if the transfers\n", "$\\hat{w}_{xy}$ are observed too (e.g. wages in labour market), then\n", "$U_{xy}=\\alpha_{xy}+w_{xy}$ and $V_{xy}=\\gamma_{xy}-w_{xy}$, so that $\\alpha$\n", "and $\\gamma$ are separately identified by\n", "$$\n", "\\left\\{\n", "\\begin{array}\n", "[c]{c}%\n", "\\hat{\\alpha}_{xy}=\\frac{\\partial G^{\\ast}\\left( \\mu\\right) }{\\partial\\mu_{xy}}-\\hat{w}_{xy}\\\\\n", "\\hat{\\gamma}_{xy}=\\frac{\\partial H^{\\ast}\\left( \\mu\\right) }{\\partial\\mu_{xy}}+\\hat{w}_{xy}%\n", "\\end{array}\n", "\\right.\n", "$$\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Choo and Siow's logit model\n", "\n", "In Choo and Siow's model [CS], the heterogeneities in tastes are Gubmel,\n", "we have\n", "$$\n", "\\mathcal{E}\\left( \\mu\\right) =2\\sum_{\\substack{x\\in\\mathcal{X}%\n", "\\\\y\\in\\mathcal{Y}}}\\mu_{xy}\\log\\mu_{xy}+\\sum_{x\\in\\mathcal{X}}\\mu_{x0}\\log\n", "\\mu_{x0}+\\sum_{y\\in\\mathcal{Y}}\\mu_{0y}\\log\\mu_{0y}.\n", "$$\n", "Note that $\\mathcal{E}\\left( \\mu\\right) < + \\infty$ if and only if $\\mu\n", "\\in\\mathcal{M}\\left( n,m\\right) $.\n", "\n", "\n", "By first order conditions above, Choo-Siow's TU-logit model implies the\n", "following matching function:\n", "$$\n", "\\mu_{xy}=M_{xy}\\left( \\mu_{x0},\\mu_{0y}\\right) :=\\sqrt{\\mu_{x0}}\\sqrt\n", "{\\mu_{0y}}\\exp\\left( \\frac{\\Phi_{xy}}{2}\\right) \n", "$$\n", "\n", "This is a gravity equation of sorts. The full link with gravity equations is explored in the next lecture. 
\n", "\n", "As a result, $\\partial\\mathcal{E}\\left( \\mu\\right) /\\partial\\mu\n", "_{xy}=2\\log\\mu_{xy}-\\log\\mu_{x0}-\\log\\mu_{0y}$, which implies that $\\Phi_{xy}$\n", "is estimated by *Choo and Siow's identification formula*\n", "$$\n", "\\hat{\\Phi}_{xy}=\\log\\frac{\\hat{\\mu}_{xy}^{2}}{\\hat{\\mu}_{x0}\\hat{\\mu}_{0y}}.\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solving equilibrium in the Choo-Siow model\n", "\n", "Write down the equilibrium equations in the TU-logit model:\n", "$$\n", "\\left\\{\n", "\\begin{array}\n", "[c]{c}\n", "\\sum_{y\\in\\mathcal{Y}}\\sqrt{\\mu_{x0}}\\sqrt{\\mu_{0y}}\\exp\\left( \\frac\n", "{\\Phi_{xy}}{2}\\right) +\\mu_{x0}=n_{x}\\\\\n", "\\sum_{x\\in\\mathcal{X}}\\sqrt{\\mu_{x0}}\\sqrt{\\mu_{0y}}\\exp\\left( \\frac\n", "{\\Phi_{xy}}{2}\\right) +\\mu_{0y}=m_{y}%\n", "\\end{array}\n", "\\right.\n", "$$\n", "\n", "\n", "Setting $a_{x}=\\sqrt{\\mu_{x0}}$, $b_{y}=\\sqrt{\\mu_{0y}}$, and\n", "$K_{xy}=\\exp\\left( \\Phi_{xy}/2\\right) $, this rewrites as\n", "$$\n", "\\left\\{\n", "\\begin{array}\n", "[c]{c}\n", "\\sum_{y\\in\\mathcal{Y}}K_{xy}a_{x}b_{y}+a_{x}^{2}=n_{x}\\\\\n", "\\sum_{x\\in\\mathcal{X}}K_{xy}a_{x}b_{y}+b_{y}^{2}=m_{y}%\n", "\\end{array}\n", "\\right.$$\n", "\n", "which is a variant of the equations previously seen to accomodate unmatched agents." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can easily adapt the IPFP to this setting. The IPFP will consists in iteratively solving quadratic equations:\n", "$$\n", "\\left\\{\n", "\\begin{array}\n", "[c]{l}\n", "a_{x}^{2t+1}=\\sqrt{n_{x}+\\left( \\sum_{y\\in\\mathcal{Y}}b_{y}^{2t}%\n", "K_{xy}/2\\right) ^{2}}-\\sum_{y\\in\\mathcal{Y}}b_{y}^{2t}K_{xy}/2\\\\\n", "b_{y}^{2t+2}=\\sqrt{m_{y}+\\left( \\sum_{x\\in\\mathcal{X}}a_{x}^{2t+1}%\n", "K_{xy}/2\\right) ^{2}}-\\sum_{x\\in\\mathcal{X}}a_{x}^{2t+1}K_{xy}/2\n", "\\end{array}\n", "\\right.\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dual problem\n", "\n", "The dual problem is given by\n", "$$\n", "\\min_{u,v}\\left\\{\n", "\\begin{array}\n", "[c]{c}%\n", "\\sum_{x}n_{x}u_{x}+\\sum_{y}m_{y}v_{y}\\\\\n", "+2\\sum_{xy}\\sqrt{n_{x}m_{y}}\\exp\\left( \\frac{\\Phi_{xy}-u_{x}-v_{y}}{2}\\right)\n", "\\\\\n", "+\\sum_{x}n_{x}\\exp\\left( -u_{x}\\right) +\\sum_{y}m_{y}\\exp\\left(\n", "-v_{y}\\right)\n", "\\end{array}\n", "\\right\\}\n", "$$\n", "\n", "\n", "**Remarks.**\n", "\n", "* This problem is an unconstrained convex optimization problem, so this formulation will be quite useful.\n", "\n", "* If $\\left( u,v\\right) $ is solution, $u_{x}=-\\log\\mu_{0|x}=-\\log\\left( \\mu_{x0}/n_{x}\\right) $ and $v_{y}=-\\log\\left( \\mu\n", "_{0|y}\\right) $.\n", "\n", "* Note that the IPFP algorithm just seen interprets as (blockwise) *coordinate descent* method in the dual problem.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DIY approach\n", "\n", "We first try the \"do-it-yourself\" approach, following [GS].\n", "Construct the objective function:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "def ObjFunc(uvlambda,scale,mkt):\n", " nbx,nby,nbk=mkt.nbx,mkt.nby,mkt.nbk\n", " u_x,v_y,λ_k = uvlambda[0:nbx] / scale,uvlambda[nbx:(nbx+nby)]/scale,uvlambda[(nbx+nby):(nbx+nby+nbk)] / scale\n", " \n", " Phi_a = mkt.φ_a_k @ λ_k\n", " mu_xy = np.exp((Phi_a[0:(nbx*nby)].reshape((nbx,nby)) - u_x.reshape((-1,1))-v_y.reshape((1,-1)))/2).flatten()\n", " mu_x0 = np.exp(Phi_a[(nbx*nby):(nbx*nby+nbx)]-u_x)\n", " mu_0y = 
np.exp(Phi_a[-nby:]-v_y)\n", "    \n", "    return (mkt.n_x.dot(u_x)+mkt.m_y.dot(v_y) - (Phi_a * mkt.μhat_a).sum() + 2 * mu_xy.sum() + mu_x0.sum() + mu_0y.sum())\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Construct its gradient:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "def grad_ObjFunc(uvlambda,scale,mkt):\n", "    nbx,nby,nbk=mkt.nbx,mkt.nby,mkt.nbk\n", "    u_x,v_y,λ_k = uvlambda[0:nbx]/scale,uvlambda[nbx:(nbx+nby)]/scale,uvlambda[(nbx+nby):(nbx+nby+nbk)]/scale\n", "    \n", "    Phi_a = mkt.φ_a_k @ λ_k\n", "    mu_x_y = np.exp((Phi_a[0:(nbx*nby)].reshape((nbx,nby)) - u_x.reshape((-1,1))-v_y.reshape((1,-1)))/2).reshape((nbx,nby))\n", "    mu_x0 = np.exp(Phi_a[(nbx*nby):(nbx*nby+nbx)]-u_x)\n", "    mu_0y = np.exp(Phi_a[-nby:]-v_y)\n", "    mu_a = np.concatenate((mu_x_y.flatten(),mu_x0,mu_0y))\n", "    \n", "    grad_u = mkt.n_x - mu_x_y.sum(axis = 1) - mu_x0 # men's margin: sum over y, i.e. along axis 1\n", "    grad_v = mkt.m_y - mu_x_y.sum(axis = 0) - mu_0y # women's margin: sum over x, i.e. along axis 0\n", "    grad_lambda = (mu_a-mkt.μhat_a ) @ mkt.φ_a_k\n", "    \n", "    return np.concatenate((grad_u,grad_v,grad_lambda.flatten()))/scale" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([-2.59293961e+01, -2.59318975e+01, -2.59374157e+01, -2.59469373e+01,\n", " -2.59559106e+01, -2.59643718e+01, -2.59682064e+01, -2.59738609e+01,\n", " -2.59826665e+01, -2.59852835e+01, -2.59877521e+01, -2.59881513e+01,\n", " -2.59907795e+01, -2.59924915e+01, -2.59927869e+01, -2.59936593e+01,\n", " -2.59943827e+01, -2.59944988e+01, -2.59951420e+01, -2.59949721e+01,\n", " -2.59952618e+01, -2.59952660e+01, -2.59954071e+01, -2.59948270e+01,\n", " -2.59945361e+01, -2.59343738e+01, -2.59385664e+01, -2.59436264e+01,\n", " -2.59523431e+01, -2.59602771e+01, -2.59693410e+01, -2.59758246e+01,\n", " -2.59812204e+01, -2.59870677e+01, -2.59887056e+01, -2.59896728e+01,\n", " -2.59898079e+01, -2.59917911e+01, -2.59924536e+01, -2.59920896e+01,\n", " -2.59934063e+01, -2.59935473e+01, -2.59938525e+01, -2.59937524e+01,\n", " -2.59938244e+01, -2.59942826e+01, -2.59937386e+01, -2.59939844e+01,\n", " -2.59934018e+01, -2.59931882e+01, 6.24885633e+02, -8.02789702e-02,\n", " -8.05367354e-02, -6.73882331e-02, -5.27153039e-02])" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "uvl_0 = np.repeat(0,choo_siow_mkt.nbx+choo_siow_mkt.nby+choo_siow_mkt.nbk)\n", "grad_ObjFunc(uvl_0,1,choo_siow_mkt)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Optimize using `scipy.optimize`:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Warning: Desired error not necessarily achieved due to precision loss.\n", " Current function value: 5.594690\n", " Iterations: 36\n", " Function evaluations: 650\n", " Gradient evaluations: 639\n", "λ_GS_diy_k = [-10.35329 1.40193083 1.06731793 1.52711253 0.62662031]\n" ] } ], "source": [ "def fit_diy(self,scale=1,tol=1e-9):\n", "    uvl_0 = np.zeros(self.nbx+self.nby+self.nbk)\n", "    outcome = optimize.minimize(ObjFunc,jac = grad_ObjFunc,args=(scale,self), method = 'CG', x0=uvl_0,options={'gtol':tol,'disp': True})\n", "    return outcome['x'][0:-self.nbk]/scale, outcome['x'][-self.nbk:]/scale\n", "\n", "TUlogit.fit_diy = fit_diy\n", "uv_GS_diy,λ_GS_diy_k = choo_siow_mkt.fit_diy(10000,tol = 1e-8)\n", "print('λ_GS_diy_k = '+str(λ_GS_diy_k))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## GLM computation\n", "\n", "We can reformulate the model as a *generalized 
linear model*, see [MN] for a reference on the latter." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Setting
\n", "$B=\n", "\\begin{pmatrix}\n", "I_{X}\\otimes 1_{Y} & 1_{X}\\otimes I_{Y} \\\\ \n", "I_{X} & 0_{X\\times Y} \\\\ \n", "0_{Y\\times X} & I_{Y}%\n", "\\end{pmatrix}%\n", "$
\n", "and
\n", "$C=\\left( -B, \\Phi \\right) ,$
\n", "which we code into:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "def TUlogit_C(self):\n", " nbx, nby, nbk = self.nbx, self.nby, self.nbk\n", " B = spr.vstack([spr.hstack([spr.kron(spr.identity(nbx), np.ones((nby,1))),spr.kron(np.ones((nbx,1)),spr.identity(nby))]),\n", " spr.hstack([spr.identity(nbx) , spr.csr_matrix((nbx,nby))]),\n", " spr.hstack([spr.csr_matrix((nby,nbx)),spr.identity(nby)])])\n", " return spr.hstack([-B,self.φ_a_k])\n", "\n", "TUlogit.C = TUlogit_C" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Letting $\\theta =\\left( u,v,\\lambda \\right) $,
\n", "one has that the estimation problem above reformulates as\n", "\n", "$\\min_{\\theta }\\left\\{ 1^{\\top }\\delta \\exp \\left( \\delta ^{-1}\\left(\n", "C\\theta \\right) \\right) -\\left( \\delta \\hat{\\mu}\\right) ^{\\top }\\delta\n", "^{-1}C\\theta \\right\\} $\n", "\n", "and thus it results from the Poisson regression of
\n", "$Y=\\hat{\\mu}$ on $X=\\delta ^{-1}C$ with weights vector $d$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Poisson coerciveness\n", "\n", "Coercivity detector:\n", "\n", "$\\max_{\\mu \\geq 0,u\\geq 0}u$\n", "\n", "s.t. $X^{\\top }\\mu +0u=b$\n", "\n", "$I\\mu -1u\\geq 0$\n", "\n" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "def poisson_coerciveness(X,y):\n", " n,m = X.shape\n", " b = X.T @ y\n", " m=grb.Model()\n", " u = m.addMVar(shape = 1)\n", " mu = m.addMVar(shape=n)\n", " m.setObjective(1 * u, grb.GRB.MAXIMIZE)\n", " m.addConstr(X.T @ mu == b)\n", " m.addConstr( mu - np.ones((n,1)) @ u >= 0)\n", " m.optimize()\n", " if m.status == grb.GRB.Status.OPTIMAL:\n", " solution = np.array(m.getAttr('x'))\n", " u = solution[0]\n", " else:\n", " u = np.NaN\n", " return(u)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "We use the Poisson regression included in the `scikit-learn` library, see
\n", "https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.PoissonRegressor.html" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "λ_GS_GLM_k = [-10.32125662 1.38355341 1.02107659 1.54579903 0.62931836]\n", "λ_GS_diy_k = [-10.35329 1.40193083 1.06731793 1.52711253 0.62662031]\n", "0.11428192310395208\n", "0.11428172615866336\n" ] } ], "source": [ "def TUlogit_fit_glm(self,tol = 1e-12,pretest = False):\n", " clf = linear_model.PoissonRegressor(fit_intercept=False,tol=tol ,verbose=3,alpha=0)\n", " clf.fit( spr.diags(1/ self.d_a) @ self.C(), self.μhat_a,sample_weight = self.d_a)\n", " return clf.coef_[0:-self.nbk], clf.coef_[-self.nbk:]\n", "\n", "TUlogit.fit_glm = TUlogit_fit_glm\n", "\n", "def TUlogit_isCoercive(self):\n", " return poisson_coerciveness( self.C(), self.μhat_a)\n", "\n", "TUlogit.isCoercive = TUlogit_isCoercive\n", "\n", "\n", "def TUlogit_assess(self,θ):\n", " C= self.C()\n", " return np.abs((np.exp(C @ θ) - self.μhat_a).reshape((1,-1)) @ C).max()\n", "\n", "TUlogit.assess = TUlogit_assess\n", "\n", "uv_GS_GLM, λ_GS_GLM_k = choo_siow_mkt.fit_glm()\n", "print(f'λ_GS_GLM_k = {λ_GS_GLM_k}')\n", "print(f'λ_GS_diy_k = {λ_GS_diy_k}')\n", "print(choo_siow_mkt.assess(np.append(uv_GS_GLM, λ_GS_GLM_k)))\n", "print(choo_siow_mkt.assess(np.append(uv_GS_diy, λ_GS_diy_k)))" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "--------------------------------------------\n", "Warning: your license will expire in 4 days\n", "--------------------------------------------\n", "\n", "Academic license - for non-commercial use only - expires 2021-10-25\n", "Using license file c:\\gurobi911\\gurobi.lic\n", "Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (win64)\n", "Thread count: 6 physical cores, 12 logical processors, using up to 12 threads\n", "Optimize a model with 730 rows, 676 columns and 5775 nonzeros\n", "Model fingerprint: 0x5bce378c\n", "Coefficient statistics:\n", " Matrix range [9e-04, 4e+00]\n", " Objective range [1e+00, 1e+00]\n", " Bounds range [0e+00, 0e+00]\n", " RHS range [5e-03, 1e-01]\n", "Presolve removed 51 rows and 50 columns\n", "Presolve time: 0.01s\n", "Presolved: 679 rows, 626 columns, 5050 nonzeros\n", "\n", "Iteration Objective Primal Inf. Dual Inf. Time\n", " 0 5.6173242e-03 3.680846e+00 0.000000e+00 0s\n", " 673 2.0886948e-05 0.000000e+00 0.000000e+00 0s\n", "\n", "Solved in 673 iterations and 0.04 seconds\n", "Optimal objective 2.088694766e-05\n" ] }, { "data": { "text/plain": [ "2.0886947663455243e-05" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "choo_siow_mkt.isCoercive()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 2 }