% Notes for NOPT048 Linear Programming and Combinatorial Optimization % Charles University in Prague, Summer 2019 % Instructor: Prof. Hans Raj Tiwary % Notes by Marcel Goh \input fontmac \input mathmac \def\conv{\hbox{conv}\,} \def\cone{\hbox{cone}\,} \def\aff{\hbox{aff}\,} \def\maximise{\hbox{maximise}\hfill} \def\minimise{\hbox{minimise}\hfill} \def\subject{\hbox{subject to}\hfill} \def\hcr{\hfill\cr} \leftrighttop{NOPT048 LINEAR PROGRAMMING AND COMBINATORIAL OPTIMIZATION}{MARCEL GOH} \maketitle{NOPT048 Linear Programming and Combinatorial Optimization\footnote{$^*$}{\eightpt Course given by Prof. Hans Raj Tiwary at Charles University in Prague}}{Notes by}{Marcel K. Goh {\rm (Prague, Czech Rep.)}}{9 June 2019} \section 1. DEFINITIONS AND THEOREMS \vskip -\medskipamount \section 1.1. Linear and integer programs The following is a {\it linear program}: $$\matrix{ & \maximise & \quad & c^Tx\hcr & \subject & & Ax \leq b.\hfill }$$ Here $x, c\in \R^n$, $b\in\R^m$, and $A$ is an $m\times n$ matrix over $\R$. The function $c^Tx$ is called the {\it objective function} and it is what the linear program is trying to maximise (or minimise). Any vector $x$ that satisfies the inequality $Ax \leq b$ is called a {\it feasible solution}. A vector $x^*$ is called an {\it optimum solution} if $c^Tx^* \geq c^Tx$ for all feasible solutions $x$. A linear program is called {\it infeasible} if there are no feasible solutions. Even if a linear program is feasible, it might be {\it unbounded}. This happens when its objective function can be made arbitrarily ``good''. In a certain sense, linear programs are easy to solve. Adding an integrality constraint to a linear program gives an {\it integer program}: $$\matrix{ & \maximise & \quad & c^Tx \hcr & \subject & & Ax \leq b,\hcr & & & x \in \Z^n.\hfill }$$ Integer programs are much harder to solve in the general case. Suppose we want to solve the following integer program: $$\matrix{ & \maximise & \quad & c^Tx \hcr & \subject & & Ax \leq b,\hcr & & & x \in \{0,1\}^n.\hfill }$$ It is often a good strategy to solve the {\it linear program relaxation} of the integer program, which gives an approximate answer: $$\matrix{ & \maximise & \quad & c^Tx \hcr & \subject & & Ax \leq b,\hcr & & & 0\leq x\leq 1.\hfill }$$ Then we can take the solution of the linear program and round each element to an integer value, in the hope of obtaining a good solution of the original integer program. In some cases, the linear program relaxation gives quite a good answer. For example, if $G$ is a graph with minimum vertex cover of size $n$, then formulating the problem as an integer program, solving its linear program relaxation, and rounding every variable of value at least $1/2$ up to 1 yields a vertex cover $S$ with $|S|\leq 2n$.
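For a concrete instance of this bound, take $G$ to be the triangle on vertices $\{1,2,3\}$. The linear program relaxation of the vertex cover integer program is $$\matrix{ & \minimise & \quad & y_1 + y_2 + y_3 \hcr & \subject & & y_1 + y_2 \geq 1,\hcr & & & y_2 + y_3 \geq 1,\hcr & & & y_1 + y_3 \geq 1,\hcr & & & 0\leq y_i\leq 1.\hfill }$$ Its optimum value is $3/2$, attained at $y = (1/2, 1/2, 1/2)$, while the smallest vertex cover of the triangle has size 2. Rounding every $y_i\geq 1/2$ up to 1 gives the cover $S=\{1,2,3\}$ of size 3, which is indeed within a factor of 2 of the optimum.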
\section 1.2. Geometric interpretation The inequality $Ax\leq b$ defines a {\it polyhedron}. More precisely, we say that a polyhedron is the intersection of finitely many {\it halfspaces}, where a halfspace is the set $\{x : a^Tx\leq \beta\}$ for some $a\in \R^n$ and $\beta\in \R$. If a polyhedron is bounded, we call it a {\it polytope}. A polyhedron is closed and convex, since the intersection of finitely many closed sets is closed and the intersection of convex sets is convex. Let $S = \{s_1,\ldots,s_m\}\subset \R^n$ be a set of $m$ points. Then the {\it convex hull} of $S$ is the set $$\conv S = \{x : x = \displaystyle\sum_{i=1}^m\lambda_i s_i, \displaystyle\sum_{i=1}^m\lambda_i = 1, \lambda_i \geq 0\}.$$ The {\it conic hull} of $S$ is the set $$\cone S = \{x : x = \displaystyle\sum_{i=1}^m\lambda_i s_i, \lambda_i\geq 0\}.$$ The {\it affine hull} of $S$ is the set $$\aff S = \{x : x = \displaystyle\sum_{i=1}^m\lambda_i s_i, \displaystyle\sum_{i=1}^m\lambda_i = 1\}.$$ The convex hull can be obtained from a conic hull by lifting: append a new coordinate equal to 1 to each point of $S$; then $\conv S$ is obtained by intersecting the conic hull of the lifted points with the hyperplane on which the new coordinate equals 1 (and dropping that coordinate again). The affine hull is the smallest affine subspace that contains all the points in $S$. Let $P = \{x: Ax\leq b\}$ be a polyhedron. An inequality $\alpha^Tx \leq \beta$ is said to be a {\it valid inequality} if and only if for all $y\in P$, $\alpha^T y\leq \beta$. Then a subset $F\subset P$ is a {\it face} of $P$ if and only if there exists a valid inequality $\alpha^T x\leq \beta$ such that $F = P\cap \{x : \alpha^T x = \beta\}$. The trivial faces of any polyhedron $P$ are $\emptyset$ and $P$ itself (obtained from the valid inequalities $0^Tx \leq 1$ and $0^Tx\leq 0$ respectively). Any face that is not trivial is called {\it proper}. The {\it dimension} of a polyhedron $P$, denoted $\dim P$, is the largest dimension of a ball contained in $P$. A {\it vertex} is a face of dimension 0, an {\it edge} is a face of dimension 1, and a {\it facet} is a face of dimension $\dim P - 1$. A point $x\in P$ is called an {\it extreme point} if it is not a convex combination of other points of $P$. Note that a point $x$ is an extreme point if and only if it is a vertex. \parenproclaim Theorem M (Minkowski-Weyl). A set of points $P$ is a polyhedron if and only if there exist finite sets $V$ and $Y$ such that $P = \conv V + \cone Y$, where the sum $A + B$ denotes the set $\{a + b : a\in A, b\in B\}$. \section 1.3. Simplex method A polyhedron is {\it pointed} if it does not contain a line. \proclaim Lemma P. Every non-empty pointed polyhedron $P$ has at least one vertex. \proclaim Theorem E. Consider the linear program $$\matrix{ & \hbox{\rm maximise}\hfill & \quad & c^Tx \hcr & \hbox{\rm subject to}\hfill & & Ax \leq b.\hfill }$$ If the polyhedron $P$ defined by $Ax\leq b$ is pointed and if an optimum exists, then there exists a vertex of $P$ that attains the maximum. The result of the above statements is an algorithm for solving linear programs, called the {\it simplex method}. Here is the general idea: \algbegin Algorithm X (Simplex method). Find an optimum solution of a linear program. \algstep X1. Find an initial vertex. It is a 0-dimensional face with some 1-dimensional faces incident on it. \algstep X2. At any vertex, check for optimality. If optimal, we are done. \algstep X3. If the vertex is not optimal, increase the value of the objective function by following an edge to a new vertex. Return to step X2.\slug
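To see the idea at work on a tiny example, consider the linear program $$\matrix{ & \maximise & \quad & x_1 + x_2 \hcr & \subject & & x_1 \leq 1,\hcr & & & x_2 \leq 1,\hcr & & & x_1, x_2 \geq 0.\hfill }$$ The feasible region is the unit square with vertices $(0,0)$, $(1,0)$, $(0,1)$ and $(1,1)$. Starting at the vertex $(0,0)$, the method can follow an edge to $(1,0)$, improving the objective from 0 to 1, and then the edge to $(1,1)$, improving it to 2. At $(1,1)$ no incident edge increases the objective any further, so this vertex is optimal.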
Let $A$ be an $m\times n$ matrix, $b\in \R^m$, $x \in\R^n$, with $n \geq m$. Let $B\subset [n] =\{1,\ldots,n\}$ be a set of cardinality $m$. Then $A_B$ denotes the square matrix formed by selecting the columns of $A$ indexed by the elements of $B$ and likewise $x_B$ denotes the vector of length $m$ formed by selecting the elements indexed by the elements of $B$. We define a vector $x$ to be a {\it basic feasible solution} of the linear program in equational form $$\matrix{ & \maximise & \quad & c^Tx \hcr & \subject & & Ax = b,\hcr & & & x\geq 0\hfill }$$ if and only if there exists $B\subset [n]$ with $|B|= m$ such that: \medskip \item {i)} $A_B$ is non-singular; \smallskip \item {ii)} $x_j = 0$ for all $j\notin B$; \smallskip \item {iii)} $x_B = A_B^{-1}b$. \medskip A set $B\subset [n]$ is called a {\it basis} if and only if $A_B$ is non-singular, and $B$ is a {\it feasible basis} if the unique solution of $A_Bx_B=b$ is non-negative. Note that a basic feasible solution is uniquely determined by the feasible basis $B$, but a vertex does {\it not} have to be determined by a unique basis. A matrix $A$ is {\it totally unimodular} if and only if every square submatrix of $A$ has determinant 0, 1, or $-1$. \section 1.4. Duality As always, we start with the linear program $$\matrix{ & \maximise & \quad & c^Tx \hcr & \subject & & Ax \leq b,\hcr & & & x \geq 0.\hfill\hfill }$$ Inequalities of a linear program can be added together, and any inequality can be multiplied by a non-negative scalar, without cutting off any feasible solutions. So we can combine inequalities to get an upper bound on the optimum value of the objective function. We want to pick coefficients to get the {\it least} upper bound possible. Let $y$ denote a vector of coefficients. We want to $$\matrix{ & \minimise & \quad & b^Ty \hcr & \subject & & A^Ty \geq c,\hcr & & & y \geq 0.\hfill }$$ The original linear program is called the {\it primal} and this new one is called the {\it dual}. Any feasible solution of the dual linear program gives an upper bound on the primal objective and, whenever both programs have optima, the optimum of the dual is equal to the optimum of the primal. \parenproclaim Theorem W (Weak Duality Theorem). Let $P$ and $D$ be primal and dual linear programs of the above form and let $x$ and $y$ be respective feasible solutions. Then $c^Tx \leq b^Ty$. \parenproclaim Theorem S (Strong Duality Theorem). Let $P$ and $D$ be primal and dual linear programs. Then exactly one of the following holds: \medskip \item {i)} P infeasible, D infeasible; \smallskip \item {ii)} P infeasible, D unbounded; \smallskip \item {iii)} P unbounded, D infeasible; \smallskip \item {iv)} P and D both feasible and bounded and furthermore, if $x^*$ and $y^*$ are primal and dual optimal solutions, then $c^Tx^* = b^Ty^*$. \medskip \parenproclaim Lemma F (Farkas lemma). For an arbitrary matrix $A$ and vector $b$, exactly one of the following holds: \medskip \item {i)} There exists a vector $x$ such that $Ax = b$ and $x \geq 0$. \smallskip \item {ii)} There exists a vector $y$ such that $A^Ty \geq 0$ and $b^Ty< 0$. \medskip \parenproclaim Lemma G (Farkas lemma 2.0). For an arbitrary matrix $A$ and vector $b$, exactly one of the following holds: \medskip \item {i)} There exists a vector $x$ such that $Ax \leq b$ and $x \geq 0$. \smallskip \item {ii)} There exists a vector $y$ such that $A^Ty \geq 0$, $y\geq0$ and $b^Ty< 0$. \medskip
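As a small check of Theorems W and S, consider the toy linear program that maximises $x_1+x_2$ subject to $x_1\leq 1$, $x_2\leq 1$ and $x\geq 0$. Its dual is $$\matrix{ & \minimise & \quad & y_1 + y_2 \hcr & \subject & & y_1 \geq 1,\hcr & & & y_2 \geq 1,\hcr & & & y_1, y_2 \geq 0,\hfill }$$ whose optimum value 2 is attained at $y=(1,1)$ and matches the primal optimum $c^Tx^*=2$ at $x^*=(1,1)$. Moreover, every dual feasible solution, such as $y=(2,3)$ with $b^Ty = 5$, gives an upper bound on the primal objective, just as Theorem W predicts.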
\section 1.5. Applications \vskip -\medskipamount \section 1.5.1. Matching and vertex cover in bipartite graphs Given a graph $G=(V,E)$, a {\it matching} is a set $M\subset E$ such that for all $v\in V$ there is at most one edge $e\in M$ with $v\in e$. The linear program that finds a maximum matching in a bipartite graph is $$\matrix{ & \maximise & \quad & \displaystyle\sum_{e\in E} x_e \hfill & \hcr & \subject & & \displaystyle\sum_{v\in e}x_e \leq 1 \hfill &\hbox{for all}\ v\in V,\hcr & & & x_e \geq 0 \hfill &\hbox{for all}\ e\in E.\hfill }$$ The dual of this linear program is the following: $$\matrix{ & \minimise & \quad & \displaystyle\sum_{v\in V} y_v \hfill & \hcr & \subject & & y_u + y_v \geq 1 \hfill &\hbox{for all}\ \{u,v\}\in E,\hcr & & & y_v \geq 0 \hfill &\hbox{for all}\ v\in V.\hfill }$$ Given a graph $G=(V,E)$, a {\it vertex cover} is a set $S \subset V$ such that for all $\{u, v\}\in E$, $|\{u,v\}\cap S|\geq 1$. The dual linear program above finds a minimum vertex cover in a bipartite graph. The duality of these two programs results in K\H{o}nig's theorem. \parenproclaim Theorem K (K\H{o}nig's theorem). Suppose $G=(V,E)$ is a bipartite graph. Let $M\subset E$ denote a maximum matching and let $S\subset V$ denote a minimum vertex cover. Then $|M|=|S|$.\slug \section 1.5.2. Matching in weighted graphs Let $G=(V,E)$ be a graph (not necessarily bipartite) with a weight function $w:E\rightarrow \R^+$. Suppose we want to find the maximum weight matching in $G$. For any subset $E'\subset E$, the {\it characteristic vector} of $E'$ is the vector $\chi^{E'}$ given by $$\chi_e^{E'} = \cases{ 1 & if $e\in E'$ \hcr 0 & if $e\notin E'$\hfill }$$ for all $e\in E$. Then the {\it matching polytope} of the graph $G$ is the set $$ M(G) = \conv\{ x \in \{0,1\}^{|E|} : x\ \hbox{is the characteristic vector of some matching in}\ G\}.$$ To get the maximum weight matching, we need only solve the linear program $$\matrix{ & \maximise & \quad & w^Tx \hcr & \subject & & x\in M(G).\hfill }$$ What does the polytope $M(G)$ look like in terms of inequalities? It turns out that $M(G)$ is the set of all vectors $x$ that satisfy \medskip \item {1.} $\displaystyle\sum_{v\in e}x_e \leq 1$ for all $v\in V$, \smallskip \item {2.} $\displaystyle\sum_{e\in E[U]}x_e \leq {|U|-1\over 2}$ for every set $U\subset V$ of odd cardinality, and \smallskip \item {3.} $x_e \geq 0$ for every $e\in E$. \medskip In a graph $G=(V,E)$, a matching $M\subset E$ is {\it perfect} if every vertex in $V$ is incident to some edge in $M$. The perfect matching polytope is the set $$ PM(G) = \conv\{ x \in \{0,1\}^{|E|} : x\ \hbox{is the characteristic vector of some perfect matching in}\ G\}.$$ For a set $U\subset V$ of vertices, let $\delta(U)$ denote the set of edges that have one endpoint in $U$ and one endpoint outside $U$. Then we find that $PM(G)$ is the set of all vectors $x$ satisfying \medskip \item {1.} $\displaystyle\sum_{v\in e}x_e = 1$ for all $v\in V$, \smallskip \item {2.} $\displaystyle\sum_{e\in \delta(U)}x_e\geq 1$ for every set $U\subset V$ of odd cardinality, and \smallskip \item {3.} $x_e \geq 0$ for every $e\in E$. \medskip Note that the number of inequalities in these descriptions can be exponential in the size of the graph, so simply writing down the linear program and handing it to a solver does not give a polynomial-time algorithm.
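The odd-set inequalities are genuinely needed here. For the triangle $K_3$, the point with $x_e = 1/2$ on every edge satisfies $\sum_{v\in e}x_e\leq 1$ at each vertex and $x_e\geq 0$, yet it does not lie in $M(K_3)$: its total weight is $3/2$, while every matching of $K_3$ contains at most one edge. The inequality for the odd set $U = V$, namely $\sum_{e\in E[U]}x_e \leq (3-1)/2 = 1$, is exactly the constraint that cuts this point off.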
\section 1.5.3. The travelling salesman problem Given a graph $G=(V,E)$, a {\it closed tour} is a walk in $G$ with no repeated edges and no repeated vertices except for the first and last vertex, which are the same. In a graph with edge costs $(c_e : e\in E)$, the {\it travelling salesman problem} asks us to find the minimum-weight closed tour that contains each vertex exactly once. To solve the travelling salesman problem, we start with the following linear program: $$\matrix{ & \minimise & \quad & \displaystyle\sum_{e\in E} c_ex_e \hcr & \subject & & \displaystyle\sum_{v\in e} x_e = 2\ \hbox{for all}\ v\in V,\hcr & & & 0\leq x_e \leq 1\ \hbox{for all}\ e\in E.\hfill }$$ What we really want to do is add the inequalities $\sum_{e\in \delta(S)} x_e \geq 2$ for every $S\subset V$ with $\emptyset \neq S \neq V$. This is called {\it subtour elimination} because it forbids any tours smaller than the entire closed tour we're trying to build. Adding these inequalities, we get the linear program relaxation of the integer program for the travelling salesman problem. The problem is that eliminating subtours adds an exponential number of inequalities to the program. Instead of adding all of these inequalities, we approach the problem as follows: \algbegin Algorithm T (Subtour elimination). This algorithm solves the travelling salesman problem by adding violated subtour constraints one at a time. \algstep T1. We start with the linear program without the subtour constraints. \algstep T2. Find the optimal solution to the linear program. \algstep T3. If we find that some subtour constraints are violated by the optimal solution, we add these inequalities to our program and return to step T2. If no subtour constraints are violated, we have a solution.\slug Step T3 is our main concern. How do we figure out if there are any violated subtour constraints? Suppose we are at a certain step of the algorithm and the linear program has just given us an optimal solution $x^*$. Let $G(x^*)$ denote the graph with vertices $V$ and edges $\{e : x_e^* > 0\}$. Without loss of generality, assume $G(x^*)$ is connected (if it is not, the vertex set of any connected component already violates a subtour constraint). Set capacities $u_e = x_e^*$ for every edge $e$. Now for any $S\subset V$, the weight of the cut $(S, V\setminus S)$ equals the value of $\sum_{e\in \delta(S)} x_e$. So we can solve the min-cut problem over $G(x^*)$ and if we get a value less than 2, we know that some subtour constraint is violated. Of course, we need some heuristics to decide which constraints to add, so there is still no guarantee of polynomial-time termination. \section 1.6. Ellipsoid algorithm So far, we do not know if linear programs can be solved in polynomial time. It turns out that optimisation is polynomially equivalent to separating over a polytope. Given $x\in \R^n$ and a polyhedron $P$, we separate as follows: Output ``yes'' if $x\in P$; otherwise, output ``no'' and a pair $(\alpha, \beta)$ such that $\alpha^Ty\leq\beta$ is a valid inequality for $P$ and $\alpha^T x > \beta$. The {\it ellipsoid algorithm} uses an oracle for separation to answer the question of feasibility: Given a polytope $P$, find an $x\in P$ or conclude that $P = \emptyset$. Before we get into the actual method, let's convince ourselves that checking for feasibility is equivalent to optimising. That optimisation implies feasibility is obvious. So let's suppose we have a method to check if a linear program is feasible. Can we use that to optimise the linear program $$\matrix{ & \maximise & \quad & c^Tx\hcr & \subject & & Ax \leq b,\hcr & & & x\geq 0?\hfill }$$ Well, from duality we know that if an optimum exists, it will also be the optimum value of the dual linear program. So we can construct a polyhedron that contains only the optimal primal-dual pairs and then check if it is feasible. So we check if the polyhedron $$\{\pmatrix{ x\hcr y } : c^Tx = b^Ty, Ax \leq b, A^Ty \geq c, x \geq 0, y\geq 0\}$$ is feasible; any point of this polyhedron gives an optimal solution $x$, if one exists.
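For the toy linear program from earlier (maximise $x_1+x_2$ subject to $x_1\leq 1$, $x_2\leq 1$, $x\geq 0$), this combined polyhedron consists of all vectors $(x_1,x_2,y_1,y_2)$ with $x_1+x_2 = y_1+y_2$, $x_1\leq 1$, $x_2\leq 1$, $y_1\geq 1$, $y_2\geq 1$ and $x, y\geq 0$. Its only point is $(1,1,1,1)$, which encodes the primal optimum $x=(1,1)$ of value 2.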
On to ellipsoids. An {\it ellipsoid} $E(A,a)$ is the set $\{x \in \R^n : (x - a)^TA^{-1}(x-a) \leq 1\}$ where $A$ is a {\it positive definite} matrix. This means that $y^TAy > 0$ for every non-zero $y$, and it follows that $A = MM^T$ for some non-singular square matrix $M$. If we let $B$ denote the unit ball in $\R^n$, then $E(A,a) = \{Mx + a : x \in B\}$. For simplicity, assume that $P$ is bounded and assume that $P$ is either empty or full-dimensional. Assume also that we have an oracle that outputs a separating hyperplane. Then the {\it ellipsoid algorithm} determines if $P = \{x : Ax \leq b\}$ is feasible. \algbegin Algorithm E (Ellipsoid algorithm). This algorithm presupposes that we have a separation oracle. Given a bounded polytope $P$ and assuming, for the sake of simplicity, that $P$ is either empty or full-dimensional, the algorithm either outputs a point $c$ in $P$ or concludes that $P$ is empty. \algstep E1. Start with an ellipsoid large enough that it definitely contains a point in the polytope, if one exists. \algstep E2. Let $c$ be the centre of the ellipsoid. Invoke the separation oracle. If $c\in P$, output $c$ and halt. \algstep E3. If we have reached the maximum number of iterations, conclude that $P = \emptyset$ and halt. \algstep E4. Otherwise, the oracle gives us a separating hyperplane that cuts the ellipsoid. Determine which half of the ellipsoid $P$ lies in, then draw a new, smaller ellipsoid that contains this half. Return to step E2.\slug The maximum number of iterations is calculated using a function polynomial in the size of the linear program, so this algorithm runs in polynomial time. \section 1.7. Matroids and greedy algorithms Given a graph $G=(V,E)$, a {\it spanning tree} is a set $T\subset E$ such that the graph $(V,T)$ is connected and contains no cycles. Kruskal's algorithm obtains a minimum weight spanning tree by maintaining a forest $F$, initially empty. If no edge can be added to $F$ without creating a cycle, then $F$ is a spanning tree (assuming $G$ is connected). Otherwise, add the minimum weight edge $e$ such that $F\cup \{e\}$ does not contain a cycle and repeat. Let $G=(V,E)$ be a graph and let $F$ denote the set of all forests in $G$. Then $(E,F)$ is called the {\it graphic matroid} of $G$. In fact, there are lots of different types of matroids. A tuple $M=(E,I)$ where $I\subset {\cal P}(E)$ is a {\it matroid} if \medskip \item {i)} $\emptyset\in I$, \smallskip \item {ii)} if $X\subset Y$ and $Y\in I$, then $X\in I$, \smallskip \item {iii)} if $X,Y \in I$ and $|Y| > |X|$, then there exists some $y\in Y\setminus X$ such that $X\cup\{y\}\in I$. \medskip A system that satisfies i) and ii) is called an {\it independence system}. Associated with any independence system $M = (E,I)$ is its {\it rank function} $r : {\cal P}(E)\rightarrow \R$ defined by $$r(X) = \max\{ |Y| : Y \subset X, Y\in I\}.$$ We call $r(E)$ the {\it rank} of $M$. A set $S\in I$ such that $|S| = r(E)$ is called a {\it base} of $M$. If $M=(E,I)$ is a matroid, then the properties of its rank function $r$ are: \medskip \item {1.} $0\leq r(X) \leq |X|$ for all $X\subset E$. \smallskip \item {2.} If $X\subset Y\subset E$, then $r(X)\leq r(Y)$. \smallskip \item {3.} $r(X\cup Y)+r(X\cap Y) \leq r(X) + r(Y)$ for all $X,Y\subset E$. \medskip Property 3 above is called {\it submodularity}. If $r$ is the rank function of a general independence system, it need only satisfy properties 1 and 2, as well as the weaker property $r(X\cup Y) \leq r(X) + r(Y)$.
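For a small example, let $G$ be the triangle with edge set $E=\{e_1,e_2,e_3\}$. The independent sets of its graphic matroid are the forests, i.e.\ all subsets of $E$ of size at most 2, so $r(X)=\min\{|X|,2\}$ and the rank of the matroid is 2. Taking $X=\{e_1,e_2\}$ and $Y=\{e_2,e_3\}$, we can check submodularity directly: $r(X\cup Y)+r(X\cap Y) = 2 + 1 \leq 2 + 2 = r(X)+r(Y)$.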
Matroids can be directly characterised by the rank function. \parenproclaim Theorem R (Rank characterisation of matroids). Let $E$ be a finite set and suppose that $r : {\cal P}(E)\rightarrow \R$ satisfies the three properties of the rank function. Then $M=(E,I)$, with $$I = \{Y \subset E : |Y| = r(Y)\},$$ is a matroid with $r$ as its rank function. To further understand the link between spanning trees and matroids, note that Kruskal's algorithm is an instance of the {\it weighted greedy algorithm}. Let $M=(E,I)$ be a matroid with a weight function $c$ defined on the elements of $E$. The algorithm proceeds as follows: \algbegin Algorithm G (Greedy algorithm). Given a matroid $M=(E,I)$ with a weight function $c$ defined on the elements of $E$, this algorithm finds a set $S\in I$ with maximum weight. \algstep G1. Set $S_0\gets\emptyset$, and $E$ is the element set of $M$. Set $i\gets 0$. \algstep G2. Select $e\in E$ with maximum $c(e)$. Set $E\gets E\setminus\{e\}$. \algstep G3. If $S_i\cup\{e\}\in I$, set $S_{i+1}\gets S_i\cup\{e\}$, $i\gets i + 1$. \algstep G4. If $E = \emptyset$, output $S = S_i$. Otherwise, return to step G2.\slug The following two theorems establish a certain equivalence between matroids and problems that can be solved with the greedy algorithm. \proclaim Theorem G. Let $M=(E,I)$ be a matroid with rank function $r$. The weighted greedy algorithm finds the maximum weighted independent set $S_k$ of cardinality $k$ for each $1\leq k\leq r(E)$.\slug \parenproclaim Theorem E (Greedy equivalence). Let $M=(E,I)$ be an independence system. If the weighted greedy algorithm works for all (non-negative) weights, then $M$ is a matroid.\slug It is worth noting that spanning trees and matroids have corresponding polytopes. For a graph $G=(V,E)$, the spanning tree polytope $ST(G)$ is the set of all vectors $x\in \R^{|E|}$ that satisfy \medskip \item {1.} $x_e \geq 0$, \smallskip \item {2.} $\displaystyle\sum_{e\in E} x_e = |V| - 1$, and \smallskip \item {3.} $\displaystyle\sum_{e\in E[S]} x_e \leq |S| - 1$, for every proper subset $S$ of $V$ with $|S| \geq 2$. \medskip For a matroid $M=(E,I)$ with rank function $r$, the matroid polytope $P_M$ is the set of all vectors $x\in \R^{|E|}$ satisfying $x_e\geq 0$ for all $e\in E$ and $\sum_{e\in T} x_e \leq r(T)$ for all $T\subset E$. \section 2. PROBLEMS AND PROOFS \proclaim Problem 1. Describe how to find an initial basic feasible solution for an arbitrary LP in standard (equational) form. \solution Suppose the constraints are $Ax = b, x\geq 0$; by multiplying rows by $-1$ if necessary we may assume $b\geq 0$. Examine the matrix $A$. We want to find an identity submatrix. If one already exists, we use it as an initial basis. If not, for any row that doesn't have a unique variable with coefficient 1, introduce a new variable. So we introduce new variables $y_1,\ldots,y_k$. Then these new variables, together with the columns already containing unit vectors, form an initial feasible basis of an auxiliary linear program in which we minimise the sum over all $y_i$. The original program is feasible if and only if this auxiliary program has optimum value 0, and in that case the final basis of the auxiliary program gives us an initial feasible basis for our original linear program.\slug \proclaim Problem 2. Write an integer program formulation for the matching/perfect matching problem. \solution The following is the integer program corresponding to the matching problem in a graph $G=(V,E)$ with a cost vector $c$ on the edges: $$\matrix{ & \maximise & \quad & \displaystyle\sum_{e\in E} c_ex_e \hfill & \hcr & \subject & & \displaystyle\sum_{v\in e} x_e \leq 1 \hfill & \hbox{for all}\ v\in V,\hcr & & & x_e\in\{0,1\} \hfill & \hbox{for all}\ e\in E.\hfill }$$ For a perfect matching, change the inequality for each vertex to equality.\slug
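For instance, if $G$ is the path on vertices $a$, $b$, $c$ with edges $ab$ and $bc$, the matching integer program reads $$\matrix{ & \maximise & \quad & c_{ab}x_{ab} + c_{bc}x_{bc} \hcr & \subject & & x_{ab} \leq 1,\hcr & & & x_{ab} + x_{bc} \leq 1,\hcr & & & x_{bc} \leq 1,\hcr & & & x_{ab}, x_{bc}\in\{0,1\}.\hfill }$$ The constraint for the middle vertex $b$ forbids taking both edges at once, and since this graph has no perfect matching, the version with equalities is infeasible.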
\proclaim Problem 3. Prove that the linear program relaxation of the standard integer program for matching/perfect matching is integral for bipartite graphs. \proof Let $LP(G,c)$ denote the linear program relaxation of the perfect matching integer program over a bipartite graph $G=(V,E)$ with weights $c$. If $LP(G,c)$ is infeasible, then $G$ has no perfect matching. For a vector $x$, let $I(x)$ denote the set $\{i:x_i\in\{0,1\}\}$. If $LP(G,c)$ is feasible, there exists at least one optimal solution. Let $x^*$ be an optimal solution that maximises the number of integer elements, i.e.\ $|I(x^*)| \geq |I(x)|$ for all optimal solutions $x$. We want to show that $|I(x^*)|=|E|$. Suppose, towards a contradiction, that $|I(x^*)| < |E|$. Then there exists an edge $e$ with $0 < x_e^* < 1$. Since the values $x_e^*$ at each vertex sum to exactly 1, every vertex incident to a fractional edge is incident to at least two of them, so the fractional edges contain a cycle $C$; because $G$ is bipartite, $C$ has even length. Let $\chi$ be the vector that alternates between $+1$ and $-1$ along the edges of $C$ and is 0 elsewhere. For small enough $\epsilon > 0$, both $x^* + \epsilon\chi$ and $x^* - \epsilon\chi$ are feasible, and since their objective values average to $c^Tx^*$ and neither can exceed it, both are optimal. Increasing $\epsilon$ until some coordinate on $C$ reaches 0 or 1 produces an optimal solution with more integral elements than $x^*$, contradicting the choice of $x^*$.\slug \proclaim Problem 7. Prove that the vertex-edge incidence matrix of a bipartite graph is totally unimodular. \proof Let $G$ be a bipartite graph with parts $X$ and $Y$ and let $A$ be its incidence matrix. We proceed by induction on the size $k$ of a square submatrix of $A$. For $k=1$ the claim holds, since every entry of $A$ is 0 or 1. Now consider $k>1$ and a $k\times k$ submatrix $Q$. Since the columns of $Q$ correspond to edges, each column of $Q$ has at most two nonzero entries (which are 1). If there is a column with only zero entries, we get $\det Q = 0$, and if there is a column with only one nonzero entry, we can expand the determinant on this column and get that up to sign, $\det Q$ equals the determinant of a $(k-1)\times(k-1)$ submatrix. The last case is if every column of $Q$ has exactly two 1s. Here we claim that $\det Q = 0$. To see this, observe that the sum of all rows of $Q$ corresponding to vertices in $X$ is the row vector $(1,\ldots,1)$, since for each column of $Q$, exactly one of its two 1s comes from a vertex in $X$. For the same reason, we get $(1,\ldots,1)$ by summing up the rows for vertices in $Y$, and it follows that the rows of $Q$ are linearly dependent.\slug \proclaim Problem 8. Prove that a polyhedron described by a totally unimodular matrix and integral right-hand side is integral. \proof Suppose that $P = \{x\in \R^n : Ax = b, x\geq 0\}$ for a totally unimodular matrix $A$ and vector $b$ with integer elements. Let $x$ be a vertex of $P$. Then there exists some $B\subset [n]$ such that $A_B$ is non-singular, $x_i = 0$ for all $i\notin B$, and $x_B = A_B^{-1}b$. By Cramer's rule, all the entries of $A_B^{-1}$ can be written as rational numbers with common denominator $\det(A_B)$. But since $A$ is totally unimodular and $A_B$ is non-singular, $\det A_B \in\{-1,1\}$, so every element of $A_B^{-1}$ is integral. So $x_B$ has integral elements and the remaining elements of $x$ are 0, so $x$ is integral, which is what we wanted.\slug \proclaim Problem 9. Prove that an independence system is a matroid if and only if the weighted greedy algorithm produces a correct result for all non-negative objective functions. \proof First we prove the ``if'' direction. Let $M=(E,I)$ be an independence system. We proceed by contraposition. We assume $M$ is not a matroid and show that there exists some non-negative objective function for which the greedy algorithm fails. So suppose that there exist $X, Y\in I$ such that $|Y|>|X|$ and for all $e\in Y\setminus X$, $X\cup \{e\}\notin I$. Let $c$ be a weight function given by $$c(e) = \cases{ 1+\epsilon & if $e\in X$\hcr 1 & if $e\in Y\setminus X$\hcr 0 & otherwise\hfill }$$ with $\epsilon$ to be determined. The greedy algorithm is supposed to output a maximum-weight independent set, and $c(Y)\geq |Y|$ since every element of $Y$ has weight at least 1. But instead, the algorithm first chooses all of $X$, cannot add any element of $Y\setminus X$, and then possibly adds some elements of weight 0, producing a set $X'$ of total weight $c(X') = |X|(1+\epsilon)$. Now if we set $\epsilon<{1\over |E|}$, then $c(X') < |X| + 1 \leq |Y| \leq c(Y)$. So the greedy algorithm chose the wrong set.
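A concrete instance of this construction: take $E=\{e_1,e_2,e_3\}$ and $I = \{\emptyset,\{e_1\},\{e_2\},\{e_3\},\{e_2,e_3\}\}$. This is an independence system but not a matroid (with $X=\{e_1\}$ and $Y=\{e_2,e_3\}$, neither element of $Y\setminus X$ can be added to $X$). With weights $c(e_1)=1+\epsilon$ and $c(e_2)=c(e_3)=1$, the greedy algorithm picks $e_1$ first and then cannot add anything else of positive weight, ending with weight $1+\epsilon$, whereas the independent set $\{e_2,e_3\}$ has weight 2.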
Now we prove the ``only if'' direction. This time, let $M=(E,I)$ be a matroid; we want to show that if $c$ is a non-negative objective function, the greedy algorithm finds maximum-weight independent sets of cardinality $k$ for every $k$ satisfying $1\leq k\leq r(E)$. Suppose, towards a contradiction, that the algorithm fails for a given $k$. Concretely, suppose the algorithm chose $S_k = \{s_1, s_2, \ldots, s_k\}$ with $c(s_1)\geq c(s_2)\geq \ldots \geq c(s_k)$ and there exists some independent set $T_k = \{t_1, t_2, \ldots, t_k\}$ with $c(t_1)\geq c(t_2)\geq \ldots \geq c(t_k)$ and $c(T_k) > c(S_k)$. This means there exists some $p$ with $1\leq p\leq k$ such that $c(t_p) > c(s_p)$ (otherwise we would have $c(T_k)\leq c(S_k)$). Look at the smallest such $p$. Consider $X = \{s_1, s_2, \ldots, s_{p-1}\}$ and $Y = \{t_1, t_2,\ldots, t_p\}$. Note that $|Y|>|X|$, so by property iii) of matroids, there exists a $t_j\in Y\setminus X$ such that $X\cup\{t_j\}\in I$. Since the $t_i$ are sorted by weight, $c(t_j)\geq c(t_p)>c(s_p)$, so the greedy algorithm should have chosen $t_j$, not $s_p$, a contradiction. \slug \proclaim Problem 10. Prove that the rank function of a matroid is submodular. \proof Let $M=(E,I)$ be a matroid and let $r$ denote its rank function. We want to show that for all $X,Y\subset E$, $r(X\cup Y) + r(X\cap Y) \leq r(X)+r(Y)$. Let $J$ be a maximal independent subset of $X\cap Y$. Extend $J$ to $J_X$, a maximal independent subset of $X$, and likewise to $J_Y$, a maximal independent subset of $Y$. We have $r(X \cap Y)=|J| = |J_X\cap J_Y|$. If we can show that $r(X\cup Y)\leq |J_X\cup J_Y|$, then submodularity follows, because $|J_X\cup J_Y|+|J_X\cap J_Y|=|J_X|+|J_Y|$. Extend $J_X$ to a maximal independent subset $K$ of $X\cup Y$. Suppose, towards a contradiction, that $|K|>|J_X\cup J_Y|$. Because $J_X\setminus J$ is contained in both $K$ and $J_X\cup J_Y$, we have $|K\setminus (J_X\setminus J)|>|J_Y|$. Now, by the choice of $J_X$, we have that $K\setminus(J_X\setminus J)$ is an independent subset of $Y$ (indeed, $K\cap X$ is an independent subset of $X$ containing $J_X$, so maximality of $J_X$ forces $K\cap X = J_X$, and then every element of $K\setminus(J_X\setminus J)$ lies in $Y$). This contradicts the choice of $J_Y$.\slug \end{document}