{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Optimization Examples: Geometric Programming (GP)\n", "\n", "* A function $f: \\mathbb{R}^n \\mapsto \\mathbb{R}$ with $\\text{dom} f = \\mathbb{R}_{++}^n$ defined as\n", "\\begin{eqnarray*}\n", "\tf(\\mathbf{x}) = c x_1^{a_1} x_2^{a_2} \\cdots x_n^{a_n},\n", "\\end{eqnarray*}\n", "where $c>0$ and $a_i \\in \\mathbb{R}$, is called a _monomial_.\n", "\n", "* A sum of monomials\n", "\\begin{eqnarray*}\n", "\tf(\\mathbf{x}) = \\sum_{k=1}^K c_k x_1^{a_{1k}} x_2^{a_{2k}} \\cdots x_n^{a_{nk}},\n", "\\end{eqnarray*}\n", "where $c_k > 0$, is called a _posynomial_.\n", "\n", "* Posynomials are closed under addition, multiplication, and nonnegative scaling.\n", "\n", "* A **geometric program** is of form\n", "\\begin{eqnarray*}\n", "\t&\\text{minimize}& f_0(\\mathbf{x}) \\\\\n", "\t&\\text{subject to}& f_i(\\mathbf{x}) \\le 1, \\quad i=1,\\ldots,m \\\\\n", "\t& & h_i(\\mathbf{x}) = 1, \\quad i=1,\\ldots,p\n", "\\end{eqnarray*}\n", "where $f_0, \\ldots, f_m$ are posynomials and $h_1, \\ldots, h_p$ are monomials. The constraint $\\mathbf{x} \\succ \\mathbf{0}$ is implicit.\n", "\n", " Is GP a convex optimization problem?\n", "\n", "* With change of variable $y_i = \\ln x_i$, a monomial\n", "\\begin{eqnarray*}\n", "\tf(\\mathbf{x}) = c x_1^{a_1} x_2^{a_2} \\cdots x_n^{a_n}\n", "\\end{eqnarray*}\n", "can be written as\n", "\\begin{eqnarray*}\n", "\tf(\\mathbf{x}) = f(e^{y_1}, \\ldots, e^{y_n}) = c (e^{y_1})^{a_1} \\cdots (e^{y_n})^{a_n} = e^{\\mathbf{a}^T \\mathbf{y} + b},\n", "\\end{eqnarray*}\n", "where $b = \\ln c$. Similarly, we can write a posynomial as\n", "\\begin{eqnarray*}\n", "\tf(\\mathbf{x}) &=& \\sum_{k=1}^K c_k x_1^{a_{1k}} x_2^{a_{2k}} \\cdots x_n^{a_{nk}} \\\\\n", "\t&=& \\sum_{k=1}^K e^{\\mathbf{a}_k^T \\mathbf{y} + b_k},\n", "\\end{eqnarray*}\n", "where $\\mathbf{a}_k = (a_{1k}, \\ldots, a_{nk})$ and $b_k = \\ln c_k$.\n", "\n", "* The original GP can be expressed in terms of the new variable $\\mathbf{y}$\n", "\\begin{eqnarray*}\n", "\t&\\text{minimize}& \\sum_{k=1}^{K_0} e^{\\mathbf{a}_{0k}^T \\mathbf{y} + b_{0k}} \\\\\n", "\t&\\text{subject to}& \\sum_{k=1}^{K_i} e^{\\mathbf{a}_{ik}^T \\mathbf{y} + b_{ik}} \\le 1, \\quad i = 1,\\ldots,m \\\\\n", "\t& & e^{\\mathbf{g}_i^T \\mathbf{y} + h_i} = 1, \\quad i=1,\\ldots,p,\n", "\\end{eqnarray*}\n", "where $\\mathbf{a}_{ik}, \\mathbf{g}_i \\in \\mathbb{R}^n$. Taking log of both objective and constraint functions, we obtain the _geometric program in convex form_\n", "\\begin{eqnarray*}\n", "\t&\\text{minimize}& \\ln \\left(\\sum_{k=1}^{K_0} e^{\\mathbf{a}_{0k}^T \\mathbf{y} + b_{0k}}\\right) \\\\\n", "\t&\\text{subject to}& \\ln \\left(\\sum_{k=1}^{K_i} e^{\\mathbf{a}_{ik}^T \\mathbf{y} + b_{ik}}\\right) \\le 0, \\quad i = 1,\\ldots,m \\\\\n", "\t& & \\mathbf{g}_i^T \\mathbf{y} + h_i = 0, \\quad i=1,\\ldots,p.\n", "\\end{eqnarray*}\n", "\n", "* Mosek is capable of solving GP. cvx has a GP mode that recognizes and transforms GP problems.\n", "\n", "* Example. Logistic regression as GP. 
 "\n",
 "* Example. Logistic regression as GP. Given data $(\\mathbf{x}_i, y_i)$, $i=1,\\ldots,n$, where $y_i \\in \\{0, 1\\}$ and $\\mathbf{x}_i \\in \\mathbb{R}^p$, the likelihood of the logistic regression model is\n",
 "\\begin{eqnarray*}\n",
 "\t& & \\prod_{i=1}^n p_i^{y_i} (1 - p_i)^{1 - y_i} \\\\\n",
 "\t&=& \\prod_{i=1}^n \\left( \\frac{e^{\\mathbf{x}_i^T \\beta}}{1 + e^{\\mathbf{x}_i^T \\beta}} \\right)^{y_i} \\left( \\frac{1}{1 + e^{\\mathbf{x}_i^T \\beta}} \\right)^{1 - y_i} \\\\\n",
 "\t&=& \\prod_{i:y_i=1} e^{\\mathbf{x}_i^T \\beta} \\prod_{i=1}^n \\left( \\frac{1}{1 + e^{\\mathbf{x}_i^T \\beta}} \\right).\n",
 "\\end{eqnarray*}\n",
 "Maximizing the likelihood is equivalent to minimizing its reciprocal, so the MLE solves\n",
 "\\begin{eqnarray*}\n",
 "\t&\\text{minimize}& \\prod_{i:y_i=1} e^{-\\mathbf{x}_i^T \\beta} \\prod_{i=1}^n \\left( 1 + e^{\\mathbf{x}_i^T \\beta} \\right).\n",
 "\\end{eqnarray*}\n",
 "Let $z_j = e^{\\beta_j}$, $j=1,\\ldots,p$. The objective becomes\n",
 "\\begin{eqnarray*}\n",
 "\t\\prod_{i:y_i=1} \\prod_{j=1}^p z_j^{-x_{ij}} \\prod_{i=1}^n \\left( 1 + \\prod_{j=1}^p z_j^{x_{ij}} \\right).\n",
 "\\end{eqnarray*}\n",
 "This leads to the GP\n",
 "\\begin{eqnarray*}\n",
 "\t&\\text{minimize}& \\prod_{i:y_i=1} s_i \\prod_{i=1}^n t_i \\\\\n",
 "\t&\\text{subject to}& \\prod_{j=1}^p z_j^{-x_{ij}} \\le s_i, \\quad i: y_i = 1 \\\\\n",
 "\t& & 1 + \\prod_{j=1}^p z_j^{x_{ij}} \\le t_i, \\quad i = 1, \\ldots, n,\n",
 "\\end{eqnarray*}\n",
 "in variables $\\mathbf{s} \\in \\mathbb{R}^{m}$, $\\mathbf{t} \\in \\mathbb{R}^n$, and $\\mathbf{z} \\in \\mathbb{R}^p$, where $m$ is the number of observations with $y_i=1$.\n",
 "\n",
 " How to incorporate the lasso penalty? Write $\\beta_j = \\beta_j^+ - \\beta_j^-$ with $\\beta_j^+, \\beta_j^- \\ge 0$ and let $z_j^+ = e^{\\beta_j^+}$, $z_j^- = e^{\\beta_j^-}$, so that $z_j^+, z_j^- \\ge 1$. Since $|\\beta_j| = \\beta_j^+ + \\beta_j^-$ at the optimum, the exponentiated lasso penalty takes the monomial form $e^{\\lambda |\\beta_j|} = (z_j^+ z_j^-)^\\lambda$.\n",
 "\n",
 "* Example. Bradley-Terry model for sports ranking. See ST758 HW8. With $y_{ij}$ the number of times team $i$ beats team $j$, the likelihood is\n",
 "\\begin{eqnarray*}\n",
 "\t\\prod_{i, j} \\left( \\frac{\\gamma_i}{\\gamma_i + \\gamma_j} \\right)^{y_{ij}}.\n",
 "\\end{eqnarray*}\n",
 "Maximizing the likelihood is equivalent to minimizing $\\prod_{i,j} (1 + \\gamma_i^{-1} \\gamma_j)^{y_{ij}}$, so the MLE is found by solving the GP\n",
 "\\begin{eqnarray*}\n",
 "\t&\\text{minimize}& \\prod_{i,j} t_{ij}^{y_{ij}} \\\\\n",
 "\t&\\text{subject to}& 1 + \\gamma_i^{-1} \\gamma_j \\le t_{ij}, \\quad i, j = 1, \\ldots, n,\n",
 "\\end{eqnarray*}\n",
 "in $\\gamma \\in \\mathbb{R}^n$ and $\\mathbf{t} \\in \\mathbb{R}^{n^2}$." ] } ], "metadata": { "@webio": { "lastCommId": null, "lastKernelId": null }, "kernelspec": { "display_name": "Julia 1.1.0", "language": "julia", "name": "julia-1.1" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "1.1.0" }, "toc": { "colors": { "hover_highlight": "#DAA520", "running_highlight": "#FF0000", "selected_highlight": "#FFD700" }, "moveMenuLeft": true, "nav_menu": { "height": "30.66666603088379px", "width": "252px" }, "navigate_menu": true, "number_sections": true, "sideBar": true, "skip_h1_title": true, "threshold": 4, "toc_cell": false, "toc_section_display": "block", "toc_window_display": true, "widenNotebook": false } }, "nbformat": 4, "nbformat_minor": 2 }