{ "cells": [ { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "# Week 11: 2018/04/16-20" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Monday reading\n", "\n", "Read Section 7.1. The first part should be review, so you can skim it. But start reading closely at \"Complexity Relationships Among Models.\" \n", "\n", "Read Section 7.2. I think the idea of a polynomial-time algorithm should be pretty clear, so you don't have to read Theorems 7.14-16 unless they are of interest to you.\n", "\n", "Read Section 7.3. It's probably a good idea to study Theorem 7.24 and/or 7.25." ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Tuesday class\n", "\n", "Up until now, we have not concerned ourselves at all with time or space efficiency, writing machines that are ridiculously inefficient. In this unit, we will finally turn our attention to the issue of time complexity. (Space complexity is an advanced topic that we will not cover.)\n", "\n", "You should already be familiar with the concepts of asymptotic time complexity and big-O notation. If you need a refresher, read Section 7.1.\n", "\n", "### Models of computation\n", "\n", "Complexity analysis always assumes an underlying model of computation. In the past, you've always analyzed your programs for the platform you're actually running on -- which is fine because differences in platforms nearly always only affect running time by constant factors, so they don't impact big-O time complexity at all. However, in this class, we're taking a much, much broader view. The same algorithm implemented in two different models might in general have different big-O time complexities.\n", "\n", "The book, in 7.1, analyzes three models: Turing machines, multitape Turing machines, and nondeterministic Turing machines. We know that a computation on any of these machines can be simulated on a Turing machine, but how much slower? That is, if a computation takes $t(n)$ time on one of these machines, how long would it take on a standard Turing machine?\n", "\n", "With multitape Turing machines, the basic idea is that accessing the current cells in a multitape TM requires 1 operation, but simulating this in a single-tape TM takes $O(t(n))$ time. This is because the multiple tapes are simulated by a single tape, so the head must visit each tape. Each tape could be up to $t(n)$ cells long, so it takes $O(t(n))$ time to visit all the tapes.\n", "\n", "With nondeterministic Turing machines, the basic idea is that there is an exponential number ($2^{O(t(n))}$) of configurations to search through. (Moreover, for each configuration, there is an additional $O(t(n))$ factor required for moving back and forth between where the previous configuration and next configuration are stored.)\n", "\n", "The theoretical model most similar to real computers is the RAM machine, which we talked about briefly a few weeks ago. In this model, each tape square holds a (arbitrarily large) natural number. You can instantly jump to any square. You can add, subtract, and compare squares. You can also treat a square as a pointer, reading its value $x$ and accessing the square at position $x$. Accessing a square with value $x$ takes $\\log_2(x)$ time, because it takes $\\log_2(x)$ bits to store $x$. A RAM program that runs in $t(n)$ time can be simulated by a multitape TM [in $O(t(n)^2)$ time](http://www.cs.utoronto.ca/~sacook/homepage/rams.pdf).\n", "\n", "We could add others. This table shows how long a computation of $t(n)$ steps under various models would take to simulate on a TM (these aren't necessarily tight upper bounds):\n", "\n", "Model | Time on TM\n", "---------|------------\n", "TM | $t(n)$\n", "multitape TM | $O(t(n)^2)$\n", "RAM | $O(t(n)^4)$ \n", "NTM | $2^{O(t(n))}$ \n", "\n", "So if an algorithm is polynomial on a multitape TM or a RAM machine, it will be slower on a TM, but still polynomial. So it seems natural to say that we care only about the distinction between polynomial algorithm and non-polynomial algorithms, in which case we don't care about the distinction between TMs, multitape TMs, RAM machines, etc. We now only care about deterministic TMs and nondeterministic TMs. This yields three complexity classes, P, NP, and beyond NP. (Remember that NP doesn't stand for non-polynomial; it stands for nondeterministic polynomial.)\n", "\n", "### Looking forward\n", "\n", "Why do we care about NTMs, if they're implausible as models of physical computation and oddballs compared to other models? Two reasons: First, it turns out that many, many problems of practical importance belong to NP. Second, some really interesting theoretical questions (some have answers and some don't yet) emerge from the study of NP.\n", "\n", "Note: This topic is also covered in Algorithms. There's going to be some overlap, and overlap is not a bad thing, but the two treatments of the topic will have different focuses. Our focus here will be on grounding NP-completeness in the computational model of NTMs, and our main result will be the Cook-Levin Theorem. The focus in Algorithms will be more on the diversity of algorithms that belong to NP-complete, and the relationships among them." ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Wednesday reading\n", "\n", "Read Section 7.4, up to but not including \"The Cook-Levin Theorem.\"" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, "source": [ "## Thursday class\n", "\n", "The definition of NP-completeness has two parts. First, in order to be NP-complete, a language must be in NP. Second, every NP problem must be reducible to it. We make both of these parts more precise below.\n", "\n", "### Search versus decision problems\n", "\n", "As we saw last class, a language or decision problem is in NP iff it is decidable by a NTM in polynomial time. But there are lots of interesting problems that are not decision problems. For example, finding the shortest distance in a graph from one node to another. Problems that are not decision problems cannot be in NP.\n", "\n", "At the beginning of the class, I tried to convince you that every problem could be converted into a decision problem. For example, the problem $A = {}$\"What is the shortest distance between nodes $u$ and $v$?\" can be converted into the problem $B = {}$\"Is the distance between $u$ and $v$ equal to $d$?\". To find $d$, we can try values of $d$ until we find one that is correct. That works if we're only concerned about what languages a computational model can recognize, but now we are also concerned about efficiency.\n", "\n", "So we want to find a decision problem $B$ such that if we had a polynomial-time oracle for $B$, then we could solve problem $A$ in polynomial time. For example, since $0 \\leq d \\leq |E| \\leq |V|^2$, the naive search for $d$ gives only a polynomial slowdown.\n", "\n", "What if $A = {}$\"What is the shortest path between nodes $u$ and $v$?\" Converting to the decision problem \"Is the shortest path between $u$ and $v$ equal to $\\pi$?\" doesn't work because there are exponentially many possible paths. But converting to the decision problem \"Does the shortest path between $u$ and $v$ start with prefix $\\pi$?\" gives only a polynomial slowdown. In general it's always possible to incrementally generate the answer like this.\n", "\n", "So a problem that is not a decision problem must be reduced to a decision problem, and the reduction must take only polynomial time. But going forward, we will only discuss decision problems.\n", "\n", "### Mapping reductions\n", "\n", "The second part of the definition is that for a language to be NP-complete, every NP language must be polynomial-time reducible to it. This condition is called _NP-hardness_. We used the concept of reduction already when discussing how to reduce search problems to decision problems. We might say that $A$ is polynomially-time reducible to $B$ if, given a polynomial-time oracle for $B$, we can solve $A$ in polynomial time.\n", "\n", "But Sipser wants you to use a tighter kind of reduction, a polynomial-time _mapping_ reduction (Definition 7.29), more commonly known as a _many-one_ reduction. Instead of converting a procedure for $B$ into a procedure for $A$, it converts an instance of $A$ into an instance of $B$. Or, another way of thinking about it is, you write a procedure for $A$ using an oracle for $B$, but you only get to consult the oracle once and you must return the oracle's answer (you can't negate it or do anything else with it).\n", "\n", "There are a number of advantages to using this definition of reduction. Whether it leads to a different set of NP-complete languages is actually unknown. But remember, when proving that all NP languages are polynomially reducible to a language, you must demonstrate a mapping reduction.\n", "\n", "### Side note on completeness in general\n", "\n", "There are other \"X-complete\" classes besides NP-complete. For example, a language is Turing-complete if every Turing-recognizable language can be reduced to it. Every Turing-recognizable but undecidable language we've seen in this class is Turing-complete. It was an open problem for a long time whether there was such a thing as a language that is neither Turing-complete nor decidable -- [there is](https://en.wikipedia.org/wiki/Turing_degree#Post.27s_problem_and_the_priority_method)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.4" } }, "nbformat": 4, "nbformat_minor": 0 }