{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "***\n", "# $\\quad$ Lecture 21 - Revision \n", "***\n", "$\\newcommand{\\vct}[1]{\\mathbf{#1}}$\n", "$\\newcommand{\\mtx}[1]{\\mathbf{#1}}$\n", "$\\newcommand{\\e}{\\varepsilon}$\n", "$\\newcommand{\\norm}[1]{\\|#1\\|}$\n", "$\\newcommand{\\minimize}{\\text{minimize}\\quad}$\n", "$\\newcommand{\\maximize}{\\text{maximize}\\quad}$\n", "$\\newcommand{\\subjto}{\\quad\\text{subject to}\\quad}$\n", "$\\newcommand{\\R}{\\mathbb{R}}$\n", "$\\newcommand{\\trans}{T}$\n", "$\\newcommand{\\ip}[2]{\\langle {#1}, {#2} \\rangle}$\n", "$\\newcommand{\\zerovct}{\\vct{0}}$\n", "$\\newcommand{\\diff}[1]{\\mathrm{d}{#1}}$\n", "$\\newcommand{\\conv}{\\operatorname{conv}}$\n", "$\\newcommand{\\inter}{{\\operatorname{int}}}$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this final lecture we address several topics related to the exam. The main talking points are listed below. Other than that, the past exam, the midterm test, and Part A of the problem sheets should serve as good guidance." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 1 Vectors and Matrices\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It is essential that you are comfortable with manipulating vectors and matrices, and be able to easily compute gradients of multivariate functions. If, for example, $\\mtx{A}\\in \\R^{n\\times n}$, $\\vct{w}\\in \\R^n$ and $\\vct{x}\\in \\R^n$, then\n", "\n", "\\begin{equation*}\n", " \\vct{w}^{\\trans}\\mtx{A}\\vct{x} = \\vct{x}^{\\trans}\\mtx{A}^{\\trans}\\vct{w} = \\ip{\\vct{w}}{\\mtx{A}\\vct{x}} = \\ip{\\mtx{A}^{\\trans}\\vct{w}}{\\vct{x}}.\n", "\\end{equation*}\n", "\n", "The gradient with respect to $\\vct{x}$ of $f(\\vct{x}) = \\vct{w}^{\\trans}\\mtx{A}\\vct{x}$ is\n", "\n", "\\begin{equation*}\n", " \\nabla_{\\vct{x}} f(\\vct{x}) = \\mtx{A}^{\\trans}\\vct{w}, \n", "\\end{equation*}\n", "\n", "Note that in particular, the gradient $\\nabla_{\\vct{x}}$ of a linear form $\\vct{w}^{\\trans}\\vct{x}$ is $\\vct{w}$, and the gradient $\\nabla_{\\vct{w}}$ is $\\vct{x}$. For a quadratic function $f(\\vct{x})=\\vct{x}^{\\trans}\\mtx{A}\\vct{x}$, \n", "\n", "\\begin{equation*}\n", "\\nabla_{\\vct{x}} = 2\\mtx{A}\\vct{x}.\n", "\\end{equation*}\n", "\n", "It is also recommended to have a look at the Preliminaries document, which is included as appendix in the complete set of lecture notes. If in doubt, you may want to write the functions out and work as in Example 2.11" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 2 Convex Sets and Convex Functions\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have seen various criteria for sets to be convex, and for functions to be convex.\n", "\n", "**Convexity of functions.**\n", "* Apply definition, $f(\\lambda\\vct{x}+(1-\\lambda)\\vct{y})\\leq \\lambda f(\\vct{x})+(1-\\lambda) f(\\vct{y})$ for $\\lambda\\in [0,1]$;\n", "* Use composition rules (multiplication with scalar, addition);\n", "* If $f$ is once or twice differentiable, use one of the criteria involving the gradient or the Hessian.\n", "\n", "**Convexity of sets.**\n", "* Apply definition, $\\lambda\\vct{x}+(1-\\lambda)\\vct{y}\\in C$ if $\\vct{x},\\vct{y}\\in C$ and $\\lambda\\in [0,1]$;\n", "* Check if $C=\\{\\vct{x}\\in \\R^n\\mid f(\\vct{x})\\leq c$ for some convex function $f$;\n", "* Characterise as sum or intersection of known convex sets.\n", "\n", "It helps to have a collection of functions and sets in mind that are convex. When determining whether a set in $\\R^2$ is convex (or a function of one variable), a small sketch may help.\n", "\n", "For convex functions taking values in $\\R^2$, one should be able to sketch the level sets." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 3 Iterative Algorithms and Convergence\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have seen a class of algorithms called *descent algorithms*, of which the most important are\n", "* Gradient Descent;\n", "* Newton's Method.\n", "\n", "Note that we encountered Newton's method in two guises: as a minimization algorithm, and as a root-finding algorithm. The minimization version of Newton's algorithm for a function $f$ is merely the root-finding version of Newton's method for the gradient $\\nabla f$.\n", "\n", "For the descent methods considered, it will be important to \n", "* be able to carry them out by hand;\n", "* know about types of stopping criteria (gradient small or difference small);\n", "* know about how to select a step length.\n", "\n", "Associated to iterative algorithms is the notion of Rates of Convergence. You should be able to recognise the rates of convergence of the algorithms discusses, as well as for sequences of numbers. When do Gradient Descent or Newton's Method fail to converge?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 4 Linear Programming\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have seen LP from the point of view of theory and from the point of view of applications. \n", "\n", "**Applications.**\n", "* Know how to convert a text problem into a linear programming problem (see introductory example from first lecture, or Question 1 in last year's exam;\n", "* Know how to transform other problems (such as those involving the $1$ or $\\infty$ norm) to equivalent LP problems.\n", "\n", "One should be able to easily switch between primal and dual formulations. \n", "\n", "**Algorithms / solving.**\n", "* Know how to sketch the feasible set and solve the problem graphically in small dimensions;\n", "* Identify and describe the central path for a specific problem;\n", "* Know the significance of the parameters appearing in the central path;\n", "* Be able to transform LPs from one form to another (see beginning of Lecture 11, for example).\n", "\n", "**Theory.**\n", "* Separating Hyperplane Theorem;\n", "* Farkas' Lemma;\n", "* Properties of polyhedra, vertices and faces;\n", "* Primal and Dual version of an LP;\n", "* Optimality conditions.\n", "\n", "One should also be familiar in working with convex or conic hulls of points. For example, given vectors $\\vct{a}_1,\\dots,\\vct{a}_p\\in \\R^n$, the set of points\n", "\\begin{equation*}\n", " \\sum_{i=1}^p x_i \\vct{a}_i, \\quad x_i\\geq 0\n", "\\end{equation*}\n", "is a convex set, a convex cone, and can be written as $\\mtx{A}\\vct{x}$, $\\vct{x}\\geq \\zerovct$. \n", "\n", "Also, if the matrix $\\mtx{A}$ consists of *rows* $\\vct{a}_i^{\\trans}$, then the set of $\\vct{x}$ such that $\\mtx{A}\\vct{x}\\leq \\zerovct$ is the same as the set of $\\vct{x}$ such that $\\ip{\\vct{x}}{\\vct{a}_i}\\leq 0$ for all $i$." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "---\n", "### 5 Non-linear Convex Optimization \n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "An important point to notice here is the analogy / similarity to linear programming theory. \n", "\n", "**Theory (in order of complexity).**\n", "* Lagrange multipliers;\n", "* Be able to derive the Lagrange Dual of a function (keep in mind the domain!);\n", "* Be able to write down the KKT conditions.\n", "\n", "The above tasks can only be \n", "\n", "It can be useful to remember some special cases. These include:\n", "* Linear programming duality;\n", "* General quadratic programming problems;\n", "* Quadratic programming with only equality constraints;\n", "\n", "The definition of the **barrier function** is a feature that we haven't strictly seen in LP and is important in the context of non-linear convex optimization. It can be helpful to be able to switch quickly between matrix notation and just listing all the constraints individually!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 6 Semidefinite programming \n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are two things to remember with respect to semidefinite programming:\n", "* Basic properties of positive semidefinite matrices (eigenvalues non-negative, definition);\n", "* That the formulas look exactly the same as in linear programming, with the inner product replace by $\\bullet$ (the trace inner product), and $\\geq$ replaces by $\\succeq$.\n", "\n", "As for examples, one should know how to transfrom simple problems into semidefinite ones. For example:\n", "* LP as special case of semidefinite programming;\n", "* Some quadratic problems as semidefinite (Exam question 3(d) or Problem 8(3))." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "### 7 Applications \n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Dispersed throughout the course we had a look at various applications. These included\n", "* Machine learning basics (supervised learning, classification) -> unconstrained optimization;\n", "* Portfolio optimization -> quadratic programming;\n", "* Support vector machines and linear separation -> quadratic programming;\n", "* Finding a maximal cut in a network -> semidefinite programming.\n", "\n", "It is not so crucial to remember every detail in the treatment of these examples, but one should be able to explain the main ideas. In particular, how exactly the different optimization types enter, what are some the problems that can arise, and what are potential concrete applications.\n", "\n", "**Thought experiment.** Imagine you need to explain to someone who is reasonably mathematically literate some of these applications, and how the different optimization types (gradient descent, quadratic programming, SDP) enter the picture. That is about the level of detail you need to remember." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python [conda root]", "language": "python", "name": "conda-root-py" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 1 }