{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "$\\LaTeX \\text{ commands here}\n", "\\newcommand{\\R}{\\mathbb{R}}\n", "\\newcommand{\\im}{\\text{im}\\,}\n", "\\newcommand{\\norm}[1]{||#1||}\n", "\\newcommand{\\inner}[1]{\\langle #1 \\rangle}\n", "\\newcommand{\\span}{\\mathrm{span}}\n", "\\newcommand{\\proj}{\\mathrm{proj}}\n", "\\newcommand{\\OPT}{\\mathrm{OPT}}\n", "\\newcommand{\\grad}{\\nabla}\n", "\\newcommand{\\eps}{\\varepsilon}\n", "$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "
\n", "\n", "**Georgia Tech, CS 4540**\n", "\n", "# L10: Online Convex Optimization\n", "\n", "Jake Abernethy\n", "\n", "Quiz password: online\n", "\n", "*Tuesday, September 24, 2019*" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Introduction to Online Convex Optimization\n", "\n", "It became clear in the early 2000s that many of the tools being developed for sequential (online) learning problems were part of a more general framework, and researchers realized that these ideas could be unified into a single master setting. The basic setup is that you're given a convex and bounded decision set $K \\subset \\mathbb{R}^d$, and you will execute the following protocol.\n", "\n", "* For $t=1, \\ldots, T$\n", "    + Alg selects $x_t \\in K$\n", "    + Alg observes convex loss function $f_t : K \\to \\mathbb{R}$\n", "\n", "*Objective*: Minimize the regret, given by\n", "$$\n", "\\text{Regret}_T := \\sum_{t=1}^T f_t(x_t) - \\min_{x \\in K} \\sum_{t=1}^T f_t(x)\n", "$$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Recall the Hedge Setting\n", "\n", "In the \"online learning with experts\" setting, on each round an algorithm picks a distribution $p_t$ over the $N$ experts and then observes a loss $\\ell_t(i)$ for each expert $i$. The goal is to minimize the cumulative expected loss relative to the loss of the best expert:\n", "$$\\sum_{t=1}^T \\left(\\sum_i p_t(i) \\ell_t(i)\\right) - \\min_i \\sum_{t=1}^T \\ell_t(i)$$\n", "\n", "#### Problem\n", "\n", "Show that the Hedge setting is a special case of OCO." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Investing your money\n", "\n", "We like to invest our money, but we don't always know the best way to invest. Let's say the stock market (say, the S&P500) fluctuates by a factor $r_t \\in (0,\\infty)$ on day $t$, where $r_t = 1$ means the market doesn't change, and $r_t = 1.01$ means the market went up by 1%.\n", "\n", "You are also welcome to invest some of your own cash in the stock market, and you start out with $C$ dollars. If you decided to invest a $p$ fraction of your money in the S&P500 every day, and keep the remaining $1-p$ fraction as cash on hand, then after $T$ days your wealth would be\n", "$C\\prod_{t=1}^T ((1-p) + pr_t)$\n", "\n", "Of course, you can change your investment every day, putting a $p_t$ fraction of your money in the market on day $t$. Then your wealth after $T$ rounds would be $C\\prod_{t=1}^T ((1-p_t) + p_tr_t)$.\n", "\n", "#### Problem\n", "\n", "Let's say your goal is to maximize your overall wealth relative to the best fixed portfolio in hindsight. That is, you want to maximize\n", "$$\\frac{C\\prod_{t=1}^T ((1-p_t) + p_tr_t)}{\\max_{p\\in [0,1]}C\\prod_{t=1}^T ((1-p) + pr_t)}$$\n", "Can you turn this into an OCO problem?" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Reduction to the linear setting\n", "\n", "Here's a trick that gets used a lot: instead of using your original sequence of functions $f_1, \\ldots, f_T$, let's use an *alternative* sequence of functions that will depend on your choice of $x_t$'s along the way.\n", "\n", "Let $h_t(x) := f_t(x_t) + \\langle \\nabla f_t(x_t), x - x_t \\rangle$. (Note that I can only define this sequence once I have committed to my choice of $x_t$'s!)\n", "\n", "#### Problem\n", "\n", "Can you relate $\\text{Regret}_T(h_1, \\ldots, h_T)$ and $\\text{Regret}_T(f_1, \\ldots, f_T)$?"
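] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "#### Numerical sanity check\n", "\n", "One useful fact for the problem above: by convexity, $h_t(x) \\le f_t(x)$ for every $x$, with equality at $x = x_t$. The cell below is a minimal sketch checking this inequality numerically; the quadratic loss, dimension, and random sample points are made-up choices purely for illustration." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "# Illustrative convex loss and its gradient (made-up example, any convex f works).\n", "f = lambda x: np.sum(x ** 2)   # f_t(x) = ||x||^2 is convex\n", "grad_f = lambda x: 2 * x       # gradient of f_t\n", "\n", "rng = np.random.default_rng(0)\n", "x_t = rng.normal(size=3)       # the point the algorithm happened to play\n", "\n", "# Linearized loss h_t(x) = f_t(x_t) + <grad f_t(x_t), x - x_t>\n", "h = lambda x: f(x_t) + grad_f(x_t) @ (x - x_t)\n", "\n", "# Convexity implies h_t(x) <= f_t(x) everywhere, with equality at x = x_t.\n", "for _ in range(1000):\n", "    x = rng.normal(size=3)\n", "    assert h(x) <= f(x) + 1e-12\n", "print('h_t(x_t) =', h(x_t), '  f_t(x_t) =', f(x_t))  # equal at x_t"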
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Online Gradient Descent (OGD)\n", "\n", "You've now read about the OGD algorithm:\n", "* Initialize $x_1 \\in K$ to be any point\n", "* For $t=1, \\ldots, T$:\n", "    + Observe $f_t(\\cdot)$\n", "    + Update $x_{t+1} = \\Pi_K\\left(x_t - \\eta_t \\nabla f_t(x_t)\\right)$, where $\\Pi_K$ denotes Euclidean projection onto $K$\n", "\n", "**Theorem**: One can show that, as long as the functions $f_t$ are $G$-Lipschitz and the set $K$ has diameter $D$, then (with step sizes $\\eta_t = \\frac{D}{G\\sqrt{t}}$)\n", "$$\\text{Regret}_T(\\text{OGD}) \\leq \\frac 3 2 GD\\sqrt{T}$$\n", "\n", "#### Problem: OGD and Experts\n", "**Question**: Can we apply OGD to the Hedge setting? If so, why might this not be the best choice of algorithm?\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Convergence bound for Gradient Descent\n", "\n", "You will notice that the OGD algorithm looks very similar to the standard GD algorithm for minimizing a fixed convex Lipschitz function $\\Phi(\\cdot)$. But keep in mind the difference here: OGD uses a sequence of functions, whereas GD operates on only a single function.\n", "\n", "#### Problem\n", "\n", "How can we use the regret bound for OGD to get a convergence bound for (iterate-averaged) gradient descent? *Hint*: You can use $\\Phi(\\cdot)$ more than once! And you might find Jensen's inequality useful." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Alternative: Follow the Leader\n", "\n", "Another algorithm that *sometimes* works well is known as *Follow the Leader* (FTL). It's defined as follows:\n", "$$x_t := \\arg\\min_{x \\in K} \\sum_{s=1}^{t-1} f_s(x)$$\n", "\n", "But this algorithm doesn't always work well. It can suffer *linear regret*. Can you construct an example where this occurs?" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "#### Answer\n", "\n", "Suppose we have $2$ experts. On the first round neither of them has made any mistakes, so the decision maker picks one of them at random. At that point the adversary selects the other expert's advice to be the correct one, and thus the chosen expert has made $1$ mistake and the decision maker has made $1$ mistake.\n", "\n", "On the second round the decision maker will pick the advice of the expert that hasn't made a mistake. Then the adversary selects the other expert's advice to be correct, and now both experts have $1$ mistake each while the decision maker has made $2$ mistakes.\n", "\n", "This sequence of choices repeats for as long as the game lasts. Suppose this game continues for $T$ rounds. Then the decision maker will have made $T$ mistakes while the best expert will have made at most $\\lceil T/2 \\rceil$ mistakes.\n", "\n", "Let regret be defined as\n", "$$\n", "\\text{Regret}_T = M_T - \\min_i M_T(i)\n", "$$\n", "where $M_T$ is the number of mistakes made by the decision maker and $M_T(i)$ is the number of mistakes made by expert $i$. Thus regret is the number of mistakes made by the decision maker minus the number of mistakes made by the best expert in hindsight.\n", "\n", "Therefore, in the *Follow The Leader* example described above the regret of the algorithm satisfies\n", "$$\n", "\\text{Regret}_T = M_T - \\min_i M_T(i) \\geq T - \\lceil T/2 \\rceil = \\lfloor T/2 \\rfloor,\n", "$$\n", "which grows linearly in $T$; the regret is $\\Theta(T)$."
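] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "#### Simulating the counter-example\n", "\n", "Here is a minimal simulation sketch of the two-expert game described above. The tie-breaking rule (pick expert $0$ on ties) and the horizon $T$ are assumptions made purely for illustration; the printed regret grows like $\\lceil t/2 \\rceil$, i.e., linearly in $t$." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "T = 20                          # illustrative horizon\n", "expert_mistakes = np.zeros(2)   # cumulative mistakes of each expert\n", "alg_mistakes = 0                # cumulative mistakes of the FTL decision maker\n", "\n", "for t in range(1, T + 1):\n", "    # FTL: follow the expert with the fewest mistakes so far (break ties toward expert 0).\n", "    choice = int(np.argmin(expert_mistakes))\n", "    # The adversary makes the chosen expert's advice wrong on this round.\n", "    expert_mistakes[choice] += 1\n", "    alg_mistakes += 1\n", "    regret = alg_mistakes - int(expert_mistakes.min())\n", "    print(f't={t:2d}  alg mistakes={alg_mistakes:2d}  best expert={int(expert_mistakes.min()):2d}  regret={regret:2d}')"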
] } ], "metadata": { "celltoolbar": "Slideshow", "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" } }, "nbformat": 4, "nbformat_minor": 4 }