{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# MathematicalProgram Tutorial\n", "For instructions on how to run these tutorial notebooks, please see the [index](./index.ipynb).\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Background\n", "Many engineering problems can be formulated as mathematical optimization problems, and solved by numerical solvers. A generic mathematical optimization problem can be formulated as\n", "$\\begin{aligned}\n", "\\begin{array}{rl}\n", " \\min_x \\; & f(x)\n", " \\\\\\text{subject to} \\; & x \\in\\mathcal{S}\n", " \\end{array}\n", " \\qquad\n", " \\boxed{\n", " \\begin{array}{ll}\n", " \\text{The real-valued decision variable is} &x\\\\\n", " \\text{The real-valued cost function is} &f(x)\\\\\n", " \\text{The constraint set is} &\\mathcal{S}\\\\\n", " \\text{The optimal } x \\text{ that minimizes the cost function is} &x^*\n", " \\end{array}\n", " }\n", "\\end{aligned}$\n", "\n", "where $x$ is the real-valued decision variable(s), $f(x)$ is the real-valued *cost function*, $\\mathcal{S}$ is the constraint set for $x$. Our goal is to find the optimal $x^*$ within the constraint set $\\mathcal{S}$, such that $x^*$ minimizes the cost function $f(x)$.\n", "\n", "For example, the following optimization problem determines the value of $x$ \n", "that minimizes $x^3 + 2x + 1$ subject to $x \\ge 1$.\n", "$\\begin{aligned}\n", "\\begin{array}{rl}\n", "\\min_x\\;&x^3 + 2x + 1\\\\\n", "\\text{subject to}\\;\\;&x \\ge 1\n", "\\end{array}\n", "\\quad\n", "\\boxed{\n", " \\begin{array}{ll}\n", " \\text{The real-valued decision variable is} & x\\\\\n", " \\text{The real-valued cost function }f(x) \\text{ is} & x^3 + 2x + 1\\\\\n", " \\text{The set }\\mathcal{S} \\text{ of constraints is} & x \\ge 1\\\\\n", " \\text{The value that minimizes the cost function is} & x^* = 1\n", " \\end{array}\n", "}\n", "\\end{aligned}$\n", "\n", "In general, how an optimization problem is solved depends on its categorization (categories include Linear Programming, Quadratic Programming, Mixed-integer Programming, etc.). Categorization depends on properties of both the cost function $f(x)$ and the constraint set $\\mathcal{S}$. For example, if the cost function $f(x)$ is a linear function of $x$, and the constraint $\\mathcal{S}$ is a linear set $\\mathcal{S} = \\{x | Ax\\le b\\}$, then we have a *linear programming* problem, which is efficiently solved with certain solvers. \n", "\n", "There are multiple solvers for each category of optimization problems,\n", "but each solver has its own API and data structures.\n", "Frequently, users need to rewrite code when they switch solvers.\n", "To remedy this, Drake provides a common API through the *MathematicalProgram* class.\n", "In addition to avoiding solver-specific code,\n", "the constraint and cost functions can be written in symbolic form (which makes code more readable).\n", "In these ways, Drake's MathematicalProgram is akin to [YALMIP](https://yalmip.github.io/) in MATLAB or [JuMP](https://github.com/JuliaOpt/JuMP.jl) in Julia, and we support both Python and C++.\n", "
Note: Drake supports many [solvers](https://drake.mit.edu/doxygen_cxx/group__solvers.html)\n", "(some are open-source and some require a license).\n", "\n", "Drake can formulate and solve the following categories of optimization problems\n", "* Linear programming\n", "* Quadratic programming\n", "* Second-order cone programming\n", "* Nonlinear nonconvex programming\n", "* Semidefinite programming\n", "* Sum-of-squares programming\n", "* Mixed-integer programming (mixed-integer linear programming, mixed-integer quadratic programming, mixed-integer second-order cone programming).\n", "* Linear complementarity problem\n", "\n", "This tutorial provides the basics of Drake's MathematicalProgram.\n", "Advanced tutorials are available at the [bottom](#Advanced-tutorials) of this document.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Basics of MathematicalProgram class\n", "Drake's MathematicalProgram class contains the mathematical formulation of an optimization problem, namely the decision variables $x$, the cost function $f(x)$, and the constraint set $\\mathcal{S}$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Initialize a MathematicalProgram object\n", "\n", " To initialize this class, first create an empty MathematicalProgram as" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pydrake.solvers import MathematicalProgram\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "# Create an empty MathematicalProgram named prog (with no decision variables, \n", "# constraints or cost function)\n", "prog = MathematicalProgram()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "### Adding decision variables\n", "Shown below, the function `NewContinuousVariables` adds two new continuous decision variables to `prog`. The newly added variables are returned as `x` in a numpy array. \n", "
Note the range of the variable is a continuous set, as opposed to binary variables which only take discrete value 0 or 1." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x = prog.NewContinuousVariables(2)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The default names of the variable in *x* are \"x(0)\" and \"x(1)\". The next line prints the default names and types in `x`, whereas the second line prints the symbolic expression \"1 + 2x[0] + 3x[1] + 4x[1]\"." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(x)\n", "print(1 + 2*x[0] + 3*x[1] + 4*x[1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To create an array `y` of two variables named \"dog(0)\"\" and \"dog(1)\", pass the name \"dog\" as a second argument to `NewContinuousVariables()`. Also shown below is the printout of the two variables in `y` and a symbolic expression involving `y`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y = prog.NewContinuousVariables(2, \"dog\")\n", "print(y)\n", "print(y[0] + y[0] + y[1] * y[1] * y[1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To create a $3 \\times 2$ matrix of variables named \"A\", type " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "var_matrix = prog.NewContinuousVariables(3, 2, \"A\")\n", "print(var_matrix)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adding constraints\n", "There are many ways to impose constraints on the decision variables. This tutorial shows a few simple examples. Refer to the links at the [bottom](#Advanced-tutorials) of this document for other types of constraints.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### AddConstraint\n", "The simplest way to add a constraint is with `MathematicalProgram.AddConstraint()`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Add the constraint x(0) * x(1) = 1 to prog\n", "prog.AddConstraint(x[0] * x[1] == 1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "You can also add inequality constraints to `prog` such as" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prog.AddConstraint(x[0] >= 0)\n", "prog.AddConstraint(x[0] - x[1] <= 0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`prog` automatically analyzes these symbolic inequality constraint expressions and determines they are all *linear* constraints on $x$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adding Cost functions\n", "In a complicated optimization problem, it is often convenient to write the total cost function $f(x)$ as a sum of individual cost functions\n", "\n", "$\\begin{aligned}\n", "f(x) = \\sum_i g_i(x)\n", "\\end{aligned}$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "#### AddCost method.\n", "The simplest way to add an individual cost function $g_i(x)$ to the total cost function $f(x)$ is with the `MathematicalProgram.AddCost()` method (as shown below)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Add a cost x(0)**2 + 3 to the total cost. Since prog doesn't have a cost before, now the total cost is x(0)**2 + 3\n", "prog.AddCost(x[0] ** 2 + 3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To add another individual cost function $x(0) + x(1)$ to the total cost function $f(x)$, simply call `AddCost()` again as follows" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "prog.AddCost(x[0] + x[1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "now the total cost function becomes $x(0)^2 + x(0) + x(1) + 3$.\n", "\n", "`prog` can analyze each of these individual cost functions and determine that $x(0) ^ 2 + 3$ is a convex quadratic function, and $x(0) + x(1)$ is a linear function of $x$.\n", "\n", "You can render your program in LaTeX at any time:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.display import display, Markdown\n", "display(Markdown(prog.ToLatex()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Solve the optimization problem\n", "Once all the decision variables/constraints/costs are added to `prog`, we are ready to solve the optimization problem.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Automatically choosing a solver\n", "The simplest way to solve the optimization problem is to call `Solve()` function. Drake's MathematicalProgram analyzes the type of the constraints/costs, and then calls an appropriate solver for your problem. The result of calling `Solve()` is stored inside the return argument. Here is a code snippet" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "Solves a simple optimization problem\n", " min x(0)^2 + x(1)^2\n", "subject to x(0) + x(1) = 1\n", " x(0) <= x(1)\n", "\"\"\"\n", "from pydrake.solvers import Solve\n", "# Set up the optimization problem.\n", "prog = MathematicalProgram()\n", "x = prog.NewContinuousVariables(2)\n", "prog.AddConstraint(x[0] + x[1] == 1)\n", "prog.AddConstraint(x[0] <= x[1])\n", "prog.AddCost(x[0] **2 + x[1] ** 2)\n", "\n", "# Now solve the optimization problem.\n", "result = Solve(prog)\n", "\n", "# print out the result.\n", "print(\"Success? \", result.is_success())\n", "# Print the solution to the decision variables.\n", "print('x* = ', result.GetSolution(x))\n", "# Print the optimal cost.\n", "print('optimal cost = ', result.get_optimal_cost())\n", "# Print the name of the solver that was called.\n", "print('solver is: ', result.get_solver_id().name())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that we can then retrieve optimization result from the return argument of `Solve`. For example, the solution $x^*$ is retrieved from `result.GetSolution()`, and the optimal cost from `result.get_optimal_cost()`.\n", "\n", "Some optimization solution is infeasible (doesn't have a solution). For example in the following code example, `result.get_solution_result()` will not report `kSolutionFound`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "An infeasible optimization problem.\n", "\"\"\"\n", "prog = MathematicalProgram()\n", "x = prog.NewContinuousVariables(1)[0]\n", "y = prog.NewContinuousVariables(1)[0]\n", "prog.AddConstraint(x + y >= 1)\n", "prog.AddConstraint(x + y <= 0)\n", "prog.AddCost(x)\n", "\n", "result = Solve(prog)\n", "print(\"Success? \", result.is_success())\n", "print(result.get_solution_result())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Manually choosing a solver\n", "\n", "If you want to choose a solver yourself, rather than Drake choosing one for you, you could instantiate a solver explicitly, and call its `Solve` function. There are two approaches to instantiate a solver. For example, if I want to solve a problem using the open-source solver [IPOPT](https://github.com/coin-or/Ipopt), I can instantiate the solver using either of the two approaches:\n", "1. The simplest approach is to call `solver = IpoptSolver()`\n", "2. The second approach is to construct a solver with a given solver ID as `solver = MakeSolver(IpoptSolver().solver_id())`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "Demo on manually choosing a solver\n", "Solves the problem\n", "min x(0)\n", "s.t x(0) + x(1) = 1\n", " 0 <= x(1) <= 1\n", "\"\"\"\n", "from pydrake.solvers import IpoptSolver\n", "prog = MathematicalProgram()\n", "x = prog.NewContinuousVariables(2)\n", "prog.AddConstraint(x[0] + x[1] == 1)\n", "prog.AddConstraint(0 <= x[1])\n", "prog.AddConstraint(x[1] <= 1)\n", "prog.AddCost(x[0])\n", "\n", "# Choose IPOPT as the solver.\n", "# First instantiate an IPOPT solver.\n", "\n", "solver = IpoptSolver()\n", "# The initial guess is [1, 1]. The third argument is the options for Ipopt solver,\n", "# and we set no solver options.\n", "result = solver.Solve(prog, np.array([1, 1]), None)\n", "\n", "print(result.get_solution_result())\n", "print(\"x* = \", result.GetSolution(x))\n", "print(\"Solver is \", result.get_solver_id().name())\n", "print(\"Ipopt solver status: \", result.get_solver_details().status,\n", " \", meaning \", result.get_solver_details().ConvertStatusToString())\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that `solver.Solve()` expects three input arguments, the optimization program `prog`, the initial guess of the decision variable values (`[1, 1]` in this case), and an optional setting for the solver (`None` in this case, we use the default IPOPT setting). If you don't have an initial guess, you could call `solver.Solve(prog)`. Drake will choose a default initial guess (a 0-valued vector), but this initial guess might be a bad starting point for optimization. Note from the following example code, with the default initial guess, the solver cannot find a solution, even though a solution exists (and could be found with initial guess [1, 1])." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pydrake.solvers import MakeSolver\n", "solver = MakeSolver(IpoptSolver().solver_id())\n", "result = solver.Solve(prog)\n", "print(result.get_solution_result())\n", "print(\"x* = \", result.GetSolution(x))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Also note that if we know which solver is called, then we can access some solver-specific result, by calling `result.get_solver_details()`. For example, `IpoptSolverDetails` contains a field `status`, namely the status code of the IPOPT solver, we could access this info by" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Ipopt solver status: \", result.get_solver_details().status,\n", " \", meaning \", result.get_solver_details().ConvertStatusToString())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each solver has its own details. You should refer to `FooSolverDetails` class on what is stored inside the return argument of `result.get_solver_details()`. For example, if you know that IPOPT is called, then refer to `IpoptSolverDetails` class; for OSQP solver, refer to `OsqpSolverDetails`, etc.\n", "\n", "### Using an initial guess\n", "Some optimization problems, such as nonlinear optimization, require an initial guess. Other types of problems, such as quadratic programming, mixed-integer optimization, etc, can be solved faster if a good initial guess is provided. The user could provide an initial guess as an input argument in `Solve` function. If no initial guess is provided, Drake will use a zero-valued vector as the initial guess.\n", "\n", "In the example below, we show that an initial guess could affect the result of the problem. Without an user-provided initial guess, the solver might be unable to find the solution." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pydrake.solvers import IpoptSolver\n", "prog = MathematicalProgram()\n", "x = prog.NewContinuousVariables(2)\n", "prog.AddConstraint(x[0]**2 + x[1]**2 == 100.)\n", "prog.AddCost(x[0]**2-x[1]**2)\n", "solver = IpoptSolver()\n", "# The user doesn't provide an initial guess.\n", "result = solver.Solve(prog, None, None)\n", "print(f\"Without a good initial guess, the result is {result.is_success()}\")\n", "print(f\"solution {result.GetSolution(x)}\")\n", "# Pass an initial guess\n", "result = solver.Solve(prog, [-5., 0.], None)\n", "print(f\"With a good initial guess, the result is {result.is_success()}\")\n", "print(f\"solution {result.GetSolution(x)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For more details on setting the initial guess, the user could refer to [Nonlinear program](./nonlinear_program.ipynb) section `Setting the initial guess`.\n", "\n", "\n", "## Add callback\n", "Some solvers support adding a callback function in each iteration. One usage of the callback is to visualize the solver progress in the current iteration. `MathematicalProgram` supports this usage through the function `AddVisualizationCallback`, although the usage is not limited to just visualization, the callback function can do anything. Here is an example" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Visualize the solver progress in each iteration through a callback\n", "# Find the closest point on a curve to a desired point.\n", "\n", "fig = plt.figure()\n", "curve_x = np.linspace(1, 10, 100)\n", "ax = plt.gca()\n", "ax.plot(curve_x, 9./curve_x)\n", "ax.plot(-curve_x, -9./curve_x)\n", "ax.plot(0, 0, 'o')\n", "x_init = [4., 5.]\n", "ax.plot(x_init[0], x_init[1], 'x', color='red')\n", "\n", "def update(x):\n", " ax.plot(x[0], x[1], 'x', color='red')\n", " \n", "prog = MathematicalProgram()\n", "x = prog.NewContinuousVariables(2)\n", "prog.AddConstraint(x[0] * x[1] == 9)\n", "prog.AddCost(x[0]**2 + x[1]**2)\n", "prog.AddVisualizationCallback(update, x)\n", "result = Solve(prog, x_init)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Advanced tutorials\n", "[Setting solver parameters](./solver_parameters.ipynb)\n", "\n", "[Updating costs and constraints (e.g. for efficient solving of many similar programs)](./updating_costs_and_constraints.ipynb)\n", "\n", "[Debugging tips](./debug_mathematical_program.ipynb)\n", "\n", "[Linear program](./linear_program.ipynb)\n", "\n", "[Quadratic program](./quadratic_program.ipynb)\n", "\n", "[Nonlinear program](./nonlinear_program.ipynb)\n", "\n", "[Sum-of-squares optimization](./sum_of_squares_optimization.ipynb)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" } }, "nbformat": 4, "nbformat_minor": 1 }