{ "cells": [ { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# Nonlinear optimization using JuMP (Julia for Mathematical Programming)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By Tyler Ransom, Duke University Social Science Research Institute\n", "\n", "tyler.ransom@duke.edu\n", "\n", "https://github.com/tyleransom" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Let's examine how to use a new `Julia` packaged called `JuMP` to illustrate some of Julia's powerful resources for optimizing objective functions of all kinds.\n", "\n", "For an overview of the `JuMP` package, see [here](http://jump.readthedocs.org/en/latest/). Essentially, `JuMP` makes use of `Julia`'s macros to greatly simplify construction and optimization of various problems.\n", "\n", "Here are some features of `JuMP`:\n", "- interfaces seamlessly with many industry-grade solvers (e.g. AMPL)\n", "- can be used to solve linear programming, nonlinear programming, and\n", "many other types of problems (including constrained optimization)\n", "- automatically differentiates the objective function (*not* numerical\n", "differentiation), resulting in speed gains\n", "- user-friendly model construction: user simply writes the objective\n", "function and any constraints in mathematical notation; JuMP then\n", "translates this into binaries at compile time\n", "\n", "In this illustration, we will use JuMP for nonlinear programming problems (i.e. maximum likelihood estimation). We will be using the open-source optimizer `Ipopt` (pronounced eye-PEE-opt)\n", "\n", "So let's load the packages we'll need and get started!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "using JuMP\n", "using Ipopt" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Now let's create some data inside of a function:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "data": { "text/plain": [ "datagen (generic function with 1 method)" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "function datagen(N,T)\n", " ## Generate data for a linear model to test optimization\n", " srand(1234)\n", " \n", " N = convert(Int64,N) #inputs to functions such as -ones- need to be integers!\n", " T = convert(Int64,T) #inputs to functions such as -ones- need to be integers!\n", " const n = convert(Int64,N*T) # use -const- as a way to declare a variable to be global (so other functions can access it)\n", " \n", " # generate the Xs\n", " const X = cat(2,ones(N*T,1),5+3*randn(N*T,1),rand(N*T,1),\n", " 2.5+2*randn(N*T,1),15+3*randn(N*T,1),\n", " .7-.1*randn(N*T,1),5+3*randn(N*T,1),\n", " rand(N*T,1),2.5+2*randn(N*T,1),\n", " 15+3*randn(N*T,1),.7-.1*randn(N*T,1),\n", " 5+3*randn(N*T,1),rand(N*T,1),2.5+2*randn(N*T,1),\n", " 15+3*randn(N*T,1),.7-.1*randn(N*T,1));\n", " \n", " # generate the betas (X coefficients)\n", " const bAns = [ 2.15; 0.10; 0.50; 0.10; 0.75; 1.2; 0.10; 0.50; 0.10; 0.75; 1.2; 0.10; 0.50; 0.10; 0.75; 1.2 ]\n", " \n", " # generate the std dev of the errors\n", " const sigAns = 0.3\n", " \n", " # generate the Ys from the Xs, betas, and error draws\n", " draw = 0 + sigAns*randn(N*T,1)\n", " const y = X*bAns+draw\n", " \n", " # return generated data so 
that other functions (below) have access\n", " return X,y,bAns,sigAns,n\n", "end" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Now let's evaluate the function so that the function outputs are in the current scope:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [], "source": [ "X,y,bAns,sigAns,n = datagen(1e4,5);" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## OLS estimation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can estimate the `bAns` parameter vector using OLS:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "bHat = (X'*X)\\(X'*y);\n", "sigHat = sqrt((y-X*bHat)'*(y-X*bHat)/(n-size(X,2)));" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now we can compare the estimates with the true parameter values:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "slideshow": { "slide_type": "subslide" } }, "outputs": [ { "data": { "text/plain": [ "17x2 Array{Float64,2}:\n", " 2.101 2.15\n", " 0.1 0.1 \n", " 0.497 0.5 \n", " 0.1 0.1 \n", " 0.751 0.75\n", " 1.215 1.2 \n", " 0.1 0.1 \n", " 0.505 0.5 \n", " 0.1 0.1 \n", " 0.75 0.75\n", " 1.208 1.2 \n", " 0.1 0.1 \n", " 0.509 0.5 \n", " 0.101 0.1 \n", " 0.75 0.75\n", " 1.221 1.2 \n", " 0.3 0.3 " ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "[round([bHat;sigHat],3) [bAns;sigAns]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The estimator performs well." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Estimating the parameters of a normal MLE using `JuMP`" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Now let's estimate this using maximum likelihood under the (valid) assumption that the errors in our `datagen()` function are independently drawn from a normal distribution with mean 0 and a standard deviation to be estimated.\n", "\n", "Recall that the likelihood function for this scenario is:\n", "$$ L = \\prod_{i=1}^{N} \\frac{1}{\\sqrt{2\\pi\\sigma^{2}}} \\exp \\left( -\\frac{\\left(y_{i}-x_{i}\\beta\\right)^{2}}{2\\sigma^{2}}\\right) $$\n", "\n", "or, more simply,\n", "\n", "$$ L = \\prod_{i=1}^{N} \\frac{1}{\\sigma} \\phi \\left(\\frac{y_{i}-x_{i}\\beta}{\\sigma}\\right) $$\n", "\n", "where $\\phi\\left(\\cdot\\right)$ is the probability density function of the standard normal distribution." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "In maximum likelihood estimation, we search for the parameters that maximize the log likelihood function. Our function above becomes:\n", "\n", "$$ \\ell = \\sum_{i=1}^{N} -\\frac{1}{2}\\log(2\\pi) - \\log(\\sigma) -\\frac{\\left(y_{i}-x_{i}\\beta\\right)^{2}}{2\\sigma^{2}} $$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "### `JuMP` syntax" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`JuMP` requires the following syntax elements:\n", "- model name\n", "- solver\n", "- variable (i.e. a parameter to search over)\n", "- objective function\n", "- `solve()` call\n", "\n", "We will construct these line-by-line below, after a quick sketch of how they fit together."
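] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Here is that sketch, using a minimal toy problem. The model name `mySketch`, the variable `theta`, and the toy objective are purely illustrative and are not used in the rest of this notebook:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "slideshow": { "slide_type": "-" } }, "outputs": [], "source": [ "# illustrative skeleton only; the actual MLE model is built piece by piece below\n", "mySketch = Model(solver=IpoptSolver()); # model name + solver\n", "@defVar(mySketch, theta >= 0);          # variable (parameter) to search over\n", "@setNLObjective(mySketch, Min, (theta-2)^2); # objective function\n", "status = solve(mySketch)                # solve() call" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With that skeleton in mind, let's now build each element for the likelihood problem."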
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "First, let's decleare the model name and solver:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [], "source": [ "myMLE = Model(solver=IpoptSolver(tol=1e-6));" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Now let's tell `JuMP` which variables to search over. For the example above, this is the vector $\\beta$ and the scalar $\\sigma$." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false, "slideshow": { "slide_type": "-" } }, "outputs": [], "source": [ "@defVar(myMLE, b[i=1:size(X,2)]);\n", "@defVar(myMLE, s>=0.0);" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Now we will write the objective function, which is the log likelihood function from above. We will also tell `JuMP` that we are maximizing." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": true, "slideshow": { "slide_type": "-" } }, "outputs": [], "source": [ "@setNLObjective(myMLE, Max, (n/2)*log(1/(2*pi*s^2))-sum{(y[i]-sum{X[i,k]*b[k], \n", " k=1:size(X,2)})^2, i=1:size(X,1)}/(2s^2));" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that we specify the summation term over $i$ using `sum{ ,i=1:size(X,1)}`. We also have to specify the linear algebra implied by `X*b` as another `sum`, this time over $k$. This is because `JuMP` doesn't understand matrix multiplication. This last fact adds a little bit of syntax to the objective function, but overall it is very readable." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Now that we have all of our elements set up, we can issue a `solve()` call and ask `JuMP` to perform the optimization:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ ":Optimal" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "status = solve(myMLE)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "We see that the solver finished at a solution that is a local maximum. 
To store the estimated values of the parameters, we can use the `getValue()` function, applied to each separate parameter:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": true }, "outputs": [], "source": [ "bOpt = getValue(b[:]);\n", "sOpt = getValue(s);" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "And if we compare them with the true values, we see that we have indeed obtained the correct parameter estimates:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "17x2 Array{Float64,2}:\n", " 2.101 2.15\n", " 0.1 0.1 \n", " 0.497 0.5 \n", " 0.1 0.1 \n", " 0.751 0.75\n", " 1.215 1.2 \n", " 0.1 0.1 \n", " 0.505 0.5 \n", " 0.1 0.1 \n", " 0.75 0.75\n", " 1.208 1.2 \n", " 0.1 0.1 \n", " 0.509 0.5 \n", " 0.101 0.1 \n", " 0.75 0.75\n", " 1.221 1.2 \n", " 0.3 0.3 " ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "[round([bOpt;sOpt],3) [bAns;sigAns]]" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "We can also return other helpful values such as the objective function value at the optimum, which is the log likelihood of the estimated model." ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "-10714.712841850938" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "getObjectiveValue(myMLE)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Additionally, we can obtain the Hessian matrix of the model, which is a key component of the variance-covariance matrix used for statistical inference. The syntax for this is a bit messy, but I will put it below for reference:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": true, "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [ "this_par = myMLE.colVal # parameter values at the optimum\n", "m_eval = JuMP.JuMPNLPEvaluator(myMLE); # set up an evaluator for the model\n", "MathProgBase.initialize(m_eval, [:ExprGraph, :Grad, :Hess]) # request gradient and Hessian capabilities\n", "hess_struct = MathProgBase.hesslag_structure(m_eval) # sparsity pattern of the Hessian of the Lagrangian\n", "hess_vec = zeros(length(hess_struct[1])) # storage for the Hessian's nonzero entries\n", "numconstr = length(m_eval.m.linconstr) + length(m_eval.m.quadconstr) + length(m_eval.m.nlpdata.nlconstr) # total number of constraints\n", "dimension = length(myMLE.colVal) # number of parameters\n", "MathProgBase.eval_hesslag(m_eval, hess_vec, this_par, 1.0, zeros(numconstr)) # evaluate the Hessian at the optimum\n", "this_hess_ld = sparse(hess_struct[1], hess_struct[2], hess_vec, dimension, dimension) # lower-triangular Hessian\n", "hOpt = this_hess_ld + this_hess_ld' - sparse(diagm(diag(this_hess_ld))); # symmetrize\n", "hOpt = -full(hOpt); #since we are maximizing\n", "seOpt = sqrt(diag(full(hOpt)\\eye(size(hOpt,1)))); # standard errors from the inverse of the negative Hessian" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "The above code obtained the estimated Hessian matrix, took its negative (because we are maximizing), and then computed standard errors of our parameter estimates from the diagonal elements of the inverse of the negative Hessian." 
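] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "As a purely illustrative follow-up (the names `vcovOpt` and `tstatOpt` below are just suggestive labels, not part of `JuMP`), the same objects can be used to form the full variance-covariance matrix and t-statistics:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "vcovOpt = full(hOpt)\\eye(size(hOpt,1)); # variance-covariance matrix: inverse of the negative Hessian\n", "tstatOpt = [bOpt;sOpt]./seOpt; # t-statistics using the standard errors computed above" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's view the point estimates side by side with their standard errors:"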
] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "17x2 Array{Float64,2}:\n", " 2.101 0.021\n", " 0.1 0.0 \n", " 0.497 0.005\n", " 0.1 0.001\n", " 0.751 0.0 \n", " 1.215 0.013\n", " 0.1 0.0 \n", " 0.505 0.005\n", " 0.1 0.001\n", " 0.75 0.0 \n", " 1.208 0.013\n", " 0.1 0.0 \n", " 0.509 0.005\n", " 0.101 0.001\n", " 0.75 0.0 \n", " 1.221 0.013\n", " 0.3 0.001" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "[round([bOpt;sOpt],3) round(seOpt,3)]" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "We can compare these with the OLS standard errors and confirm that they are the same:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false, "slideshow": { "slide_type": "fragment" } }, "outputs": [ { "data": { "text/plain": [ "16x2 Array{Float64,2}:\n", " 2.101 0.021\n", " 0.1 0.0 \n", " 0.497 0.005\n", " 0.1 0.001\n", " 0.751 0.0 \n", " 1.215 0.013\n", " 0.1 0.0 \n", " 0.505 0.005\n", " 0.1 0.001\n", " 0.75 0.0 \n", " 1.208 0.013\n", " 0.1 0.0 \n", " 0.509 0.005\n", " 0.101 0.001\n", " 0.75 0.0 \n", " 1.221 0.013" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "seHat = sqrt((sigHat^2).*diag((X'*X)\\eye(size(X,2))));\n", "[round(bHat,3) round(seHat,3)]" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true, "slideshow": { "slide_type": "slide" } }, "source": [ "## Constrained optimization in JuMP" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Constraints can easily be added to the objective function. There are two ways of doing this:\n", "1. Directly to the `@defVar` statement, e.g. `@defVar(myMLE, s>=0.0);` restricts `s` to be weakly positive.\n", "2. Additional lines after the set of `@defVar` statements, but before the `@setNLObjective` statement. The syntax for this is, e.g. `@addConstraint(myMLE, b[15] == 0)` which would restrict the 15th element of the `b` vector to equal 0.\n", "\n", "`JuMP` accepts up to literally thousands of constraints, and its simple syntax makes coding much easier." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## The Final Word" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Pros of `JuMP`:\n", "- Versatile modeling language that takes advantage of `Julia`'s features.\n", "- Interfaces to a variety of optimizers, meaning the user can simply call a different optimizer without adjusting any underlying code.\n", "- Very fast. Roughly 3+ times faster than `Matlab`'s `fminunc` for a simple large-scale problem.\n", "- User can specify maximization or minimization on the fly without needed to re-express the objective function." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "Cons of `JuMP`:\n", "- Because JuMP does constrained optimization if the user specifies constraints on any of the parameters. Thus, the hessian it returns is the hessian of the corresponding *Lagrangian*, not the hessian of the *objective*. 
This makes inference a bit trickier.\n", "- No matrix multiplication is allowed, so users must express matrix operations in scalar (summation) form." ] } ], "metadata": { "kernelspec": { "display_name": "Julia 0.4.3", "language": "julia", "name": "julia-0.4" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "0.4.3" } }, "nbformat": 4, "nbformat_minor": 0 }