{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Probabilistic structures for scalable computing\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Online mean and variance estimates\n", "\n", "The first technique we'll introduce isn't a probabilistic structure at all, but it serves as a warm-up for the more involved techniques we'll look at later. We'll use Chan's formula for online mean and variance estimates, so that we can calculate an estimated mean and variance in a single pass over a large data set. As we'll see, this technique also lets us combine estimates for several data sets (e.g., for processing a partitioned collection in parallel)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import math\n", "from sys import float_info\n", "\n", "class StreamMV(object):\n", "    \"\"\" Tracks the count, minimum, maximum, mean, and variance\n", "        of a stream of samples in a single pass \"\"\"\n", "\n", "    def __init__(self, count=0, min=float_info.max,\n", "                 max=-float_info.max, m1=0.0, m2=0.0):\n", "        (self.count, self.min, self.max) = (count, min, max)\n", "        (self.m1, self.m2) = (m1, m2)\n", "\n", "    def __lshift__(self, sample):\n", "        \"\"\" Updates this estimate with a single new sample \"\"\"\n", "        (self.max, self.min) = (max(self.max, sample), min(self.min, sample))\n", "        dev = sample - self.m1\n", "        self.m1 = self.m1 + (dev / (self.count + 1))\n", "        self.m2 = self.m2 + (dev * dev) * self.count / (self.count + 1)\n", "        self.count += 1\n", "        return self\n", "\n", "    def mean(self):\n", "        return self.m1\n", "\n", "    def variance(self):\n", "        return self.m2 / self.count\n", "\n", "    def stddev(self):\n", "        return math.sqrt(self.variance())\n", "\n", "    def merge_from(self, other):\n", "        \"\"\" Combines the estimate from other (e.g., one computed over\n", "            another partition of the data) into this one \"\"\"\n", "        if other.count == 0:\n", "            return self\n", "        if self.count == 0:\n", "            (self.m1, self.m2) = (other.m1, other.m2)\n", "            self.count = other.count\n", "            (self.min, self.max) = (other.min, other.max)\n", "            return self\n", "        else:\n", "            dev = other.m1 - self.m1\n", "            new_count = other.count + self.count\n", "            self.m1 = (self.count * self.m1 + other.count * other.m1) / new_count\n", "            self.m2 = self.m2 + other.m2 + (dev * dev) * self.count * other.count / new_count\n", "            self.count = new_count\n", "            self.max = max(self.max, other.max)\n", "            self.min = min(self.min, other.min)\n", "            return self" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can test this code by sampling from a random distribution with known mean and variance. 
(We're using the Poisson distribution with a $\\lambda$ parameter of 7, which should have a mean and variance of 7, but you could try with any other distribution if you wanted.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.stats import poisson\n", "sink = StreamMV()\n", "\n", "for p in poisson.rvs(7, size=10000):\n", " sink << p\n", "\n", "print (sink.mean(), sink.variance())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that we can also parallelize this work:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.stats import poisson\n", "s1, s2 = StreamMV(), StreamMV()\n", "\n", "for p in poisson.rvs(7, size=10000):\n", " s1 << p\n", "\n", "\n", "for p in poisson.rvs(7, size=10000):\n", " s2 << p\n", "\n", "print(\"s1 mean %f, variance %f, count %d\" % (s1.mean(), s1.variance(), s1.count))\n", "print(\"s2 mean %f, variance %f, count %d\" % (s2.mean(), s2.variance(), s2.count))\n", "\n", "s1.merge_from(s2)\n", "\n", "print(\"s1+s2 mean %f, variance %f, count %d\" % (s1.mean(), s1.variance(), s1.count))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The mean and variance estimate technique we've just shown has a few things in common with the other techniques we'll look at:\n", "\n", "1. It's _incremental_, meaning that it is possible to update an estimate with a single sample at a time (this also implies that it's _single-pass_, meaning that you only need to see each sample once).\n", "2. It's _parallel_, meaning that it is possible to combine estimates for subsets of the population of interest and get an estimate for their union, and\n", "3. It's _scalable_, meaning that it requires a constant amount of space no matter how many samples it processes." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Bloom filter" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A conventional hash table (or hash table-backed set structure) consists of a series of _buckets_. Hash table insert looks like this:\n", "\n", "1. First, use the hash value of the key to identify the index of the bucket that should contain it. \n", "2. If the bucket is empty, update the bucket to contain the key and value (with a trivial value in the case of a hashed set). \n", "3. If the bucket is not empty and the key stored in it is not the one you've hashed, handle this _hash collision_. There are several strategies to handle hash collisions precisely; most involve extra lookups (e.g., having a second hash function or going to the next available bucket) or extra space (e.g., having a linked list of keys and values in each bucket).\n", "\n", "Hash table lookup proceeds similarly: \n", "\n", "1. Looking up the index of the bucket that should contain a key (as above).\n", "2. Check to see if that bucket contains the key. \n", " - If the bucket contains the key, return the value (or \"true\" in the case of a hash-backed set).\n", " - If the bucket contains nothing, then the key is not in the table. \n", " - If the bucket contains something else, follow the strategy for resolving collisions until finding a bucket that contains the key or exhausting all possible buckets.\n", "\n", "Think of a [Bloom filter](https://en.wikipedia.org/wiki/Bloom_filter) as a hashed set structure that has no precise way to handle collisions. 
Instead, the Bloom filter ameliorates the impact of hash collisions by using _multiple hash functions_. The buckets in a Bloom filter are merely bits: they do not store the identities of keys. When a value is inserted into the Bloom filter, multiple hash functions are used to select which buckets should be set to true (buckets that are already true are not changed). This means that if _all_ of the buckets for a given key are true, the Bloom filter _may_ contain it, but if _any_ of the buckets for a given key are false, the Bloom filter _definitely does not_ contain it.\n", "\n", "Let's see an implementation. We'll start by building a basic bit vector class so that we can efficiently store values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy\n", "\n", "class BitVector(object):\n", "    def __init__(self, size):\n", "        self._size = size\n", "        # round up to the number of 64-bit words needed to hold size bits\n", "        ct = (size + 63) // 64\n", "        self._entries = numpy.zeros(int(ct), numpy.uint64)\n", "    \n", "    def __len__(self):\n", "        return self._size\n", "    \n", "    def __getitem__(self, key):\n", "        k = int(key)\n", "        return (self._entries[k // 64] & numpy.uint64(1 << (k % 64))) > 0\n", "    \n", "    def __setitem__(self, key, value):\n", "        k = int(key)\n", "        update = numpy.uint64(1 << (k % 64))\n", "        if value:\n", "            self._entries[k // 64] = self._entries[k // 64] | update\n", "        else:\n", "            # clear the bit (rather than toggling it)\n", "            self._entries[k // 64] = self._entries[k // 64] & ~update\n", "    \n", "    def merge_from(self, other):\n", "        numpy.bitwise_or(self._entries, other._entries, self._entries)\n", "    \n", "    def intersect_from(self, other):\n", "        numpy.bitwise_and(self._entries, other._entries, self._entries)\n", "    \n", "    def dup(self):\n", "        result = BitVector(self._size)\n", "        result.merge_from(self)\n", "        return result\n", "    \n", "    def intersect(self, other):\n", "        result = BitVector(self._size)\n", "        numpy.bitwise_and(self._entries, other._entries, result._entries)\n", "        return result\n", "    \n", "    def union(self, other):\n", "        result = BitVector(self._size)\n", "        numpy.bitwise_or(self._entries, other._entries, result._entries)\n", "        return result\n", "    \n", "    def count_set_bits(self):\n", "        \"\"\" Count the number of bits set in this vector. \n", "            There are absolutely better ways to do this\n", "            but this implementation is suitable for\n", "            occasional use. \"\"\"\n", "        def set_bits(i):\n", "            result = 0\n", "            i = int(i)\n", "            while i:\n", "                result += (i & 1)\n", "                i >>= 1\n", "            return result\n", "        return sum([set_bits(x) for x in self._entries])\n", "        " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now implement the Bloom filter using the bit vector to store values." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class Bloom(object):\n", "    def __init__(self, size, hashes):\n", "        \"\"\" Initializes a Bloom filter with the\n", "            given size and a collection of hashes, \n", "            which are functions taking arbitrary \n", "            values and returning integers. \n", "            \n", "            hashes can be either a function taking \n", "            a value and returning a list of results\n", "            or a list of functions. 
In the latter \n", "            case, this constructor will synthesize \n", "            the former. \"\"\"\n", "        self.__buckets = BitVector(size)\n", "        self.__size = len(self.__buckets)\n", "        \n", "        if hasattr(hashes, '__call__'):\n", "            self.__hashes = hashes\n", "        else:\n", "            funs = hashes[:]\n", "            def h(value):\n", "                return [f(value) for f in funs]\n", "            self.__hashes = h\n", "    \n", "    def size(self):\n", "        return self.__size\n", "    \n", "    def insert(self, value):\n", "        \"\"\" Inserts a value into this set \"\"\"\n", "        for h in self.__hashes(value):\n", "            self.__buckets[h % self.__size] = True\n", "    \n", "    def lookup(self, value):\n", "        \"\"\" Returns true if value may be in this set\n", "            (i.e., may return false positives) \"\"\"\n", "        for h in self.__hashes(value):\n", "            if not self.__buckets[h % self.__size]:\n", "                return False\n", "        return True" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we'll need some different hash functions to use in our Bloom filter. We can simulate multiple hash functions by using one of the hash functions supplied in `hashlib` and simply slicing out different parts of its digest." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from hashlib import sha1\n", "import pickle\n", "\n", "def h_sha1(value):\n", "    bvalue = type(value) == bytes and value or pickle.dumps(value)\n", "    return sha1(bvalue).hexdigest()\n", "\n", "def h1(value):\n", "    return int(h_sha1(value)[0:8], 16)\n", "\n", "def h2(value):\n", "    return int(h_sha1(value)[8:16], 16)\n", "\n", "def h3(value):\n", "    return int(h_sha1(value)[16:24], 16)\n", "\n", "def hashes_for(count, stride):\n", "    \"\"\" Returns a function mapping a value to count hashes, each taken\n", "        from a stride-character slice of the value's SHA-1 hex digest \"\"\"\n", "    def hashes(value):\n", "        bvalue = type(value) == bytes and value or pickle.dumps(value)\n", "        digest = sha1(bvalue).hexdigest()\n", "        return [int(digest[s:s+stride], 16) for s in [x * stride for x in range(count)]]\n", "    return hashes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's construct a Bloom filter using our three hashes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# equivalent to bloom = Bloom(1024, [h1, h2, h3])\n", "bloom = Bloom(1024, hashes_for(3, 8))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bloom.insert(\"foobar\")\n", "bloom.lookup(\"foobar\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bloom.lookup(\"absent\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So far, so good! Now let's run an experiment to see how our false positive rate changes over time. We're going to construct a random stream of values and insert them into a Bloom filter -- but we're going to look them up first. Since it is extremely improbable that we'll get the same random values twice in a short simulation (the period of the Mersenne Twister that Python uses is far too large to allow this), we can be fairly certain that any values for which `lookup` returns true before we've inserted them are false positives. We'll record the false positive rate every 100 samples."
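 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before we run the experiment, a brief aside on sizing. The standard rule-of-thumb formulas say that to hold $n$ elements at a target false positive rate $p$, a Bloom filter wants roughly $m = -n \\ln p / (\\ln 2)^2$ bits and $k = (m / n) \\ln 2$ hash functions. The helper below (an illustrative `bloom_parameters` function, not part of the structures above) is a minimal sketch of that calculation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import math\n", "\n", "def bloom_parameters(n, p):\n", "    # rule-of-thumb sizing: bits m and hash count k for n expected\n", "    # elements at a target false positive rate p (illustrative helper)\n", "    m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))\n", "    k = max(1, round((m / n) * math.log(2)))\n", "    return m, k\n", "\n", "# for example: roughly how large a filter would we want for 4096 elements at a 1% FPR?\n", "bloom_parameters(4096, 0.01)"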
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def bloom_experiment(sample_count, size, hashes, seed=0x15300625):\n", " import random\n", " from collections import namedtuple\n", " \n", " random.seed(seed)\n", " bloom = Bloom(size, hashes)\n", " \n", " result = []\n", " false_positives = 0\n", " \n", " for i in range(sample_count):\n", " bits = random.getrandbits(64)\n", " if bloom.lookup(bits):\n", " false_positives = false_positives + 1\n", " bloom.insert(bits)\n", " \n", " if i % 100 == 0:\n", " result.append((i + 1, false_positives / float(i + 1)))\n", " result.append((i + 1, false_positives / float(i + 1)))\n", " return result" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pandas import DataFrame\n", "\n", "results = bloom_experiment(1 << 18, 4096, hashes_for(3, 8))\n", "df = DataFrame.from_records(results)\n", "df.rename(columns={0: \"unique values\", 1: \"false positive rate\"}, inplace=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "%config InlineBackend.figure_format = 'svg'\n", "\n", "import seaborn as sns\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "sns.set(color_codes=True)\n", "_, ax = plt.subplots(figsize=(5,5))\n", "ax.set(xscale=\"log\")\n", "\n", "_ = sns.regplot(\"unique values\", \"false positive rate\", df, ax=ax, fit_reg=False, scatter=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see how increasing the size of the filter changes our results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results = bloom_experiment(1 << 18, 16384, hashes_for(3, 8))\n", "df = DataFrame.from_records(results )\n", "df.rename(columns={0: \"unique values\", 1: \"false positive rate\"}, inplace=True)\n", "_, ax = plt.subplots(figsize=(5,5))\n", "ax.set(xscale=\"log\")\n", "\n", "\n", "_ = sns.regplot(\"unique values\", \"false positive rate\", df, ax=ax, fit_reg=False, scatter=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Analytic properties\n", "\n", "We can analytically predict a false positive rate for a given Bloom filter. If $k$ is the number of hash functions, $m$ is the size of the Bloom filter in bits, and $n$ is the number of elements in the set, we can expect a false positive rate of $ ( 1 - e^{- kn / m} )^k $. 
Let's plot that function for our previous example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results = []\n", "import math\n", "hash_count = 3\n", "filter_size = 16384\n", "\n", "entries = 0\n", "while entries < 1 << 18:\n", " results.append((entries + 1, math.pow(1 - math.pow(math.e, -((hash_count * (entries + 1)) / filter_size)), hash_count)))\n", " entries = entries + 100\n", "\n", "df = DataFrame.from_records(results)\n", "df.rename(columns={0: \"unique values\", 1: \"false positive rate\"}, inplace=True)\n", "_, ax = plt.subplots(figsize=(5,5))\n", "ax.set(xscale=\"log\")\n", "\n", "\n", "_ = sns.regplot(\"unique values\", \"false positive rate\", df, ax=ax, fit_reg=False, scatter=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see, our expected false positive rate lines up very closely to our actual false positive rate.\n", "\n", "## Other useful properties\n", "\n", "Since it is possible to incrementally update a Bloom filter by adding a single element, the Bloom filter is suitable for stream processing.\n", "\n", "However, it is also possible to find the _union_ of two Bloom filters if they have the same size and were constructed with the same hash functions, which means it is possible to use the Bloom filter for parallel batch processing (i.e., approximating a very large set by combining the Bloom filters approximating its subsets). The union of Bloom filters approximating sets $A$ and $B$ is the bucketwise OR of $A$ and $B$. The union of Bloom filters approximating sets $A$ and $B$ will produce the same result as the Bloom filter approximating the set $A \\cup B$.\n", "\n", "It is also possible to find the _intersection_ of two Bloom filters by taking their bucketwise AND. $ \\mathrm{Bloom}(A) \\cap \\mathrm{Bloom}(B) $ may be less precise than $ \\mathrm{Bloom}(A \\cap B) $; the upper bound on the false positive rate for $ \\mathrm{Bloom}(A) \\cap \\mathrm{Bloom}(B) $ will be the greater of the false positive rates for $ \\mathrm{Bloom}(A) $ and $ \\mathrm{Bloom}(B) $." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class Bloom(object):\n", " def __init__(self, size, hashes):\n", " \"\"\" Initializes a Bloom filter with the\n", " given size and a collection of hashes, \n", " which are functions taking arbitrary \n", " values and returning integers. \n", " \n", " hashes can be either a function taking \n", " a value and returning a list of results\n", " or a list of functions. 
In the latter \n", " case, this constructor will synthesize \n", " the former \"\"\"\n", " self.__buckets = BitVector(size)\n", " self.__size = len(self.__buckets)\n", " \n", " if hasattr(hashes, '__call__'):\n", " self.__hashes = hashes\n", " else:\n", " funs = hashes[:]\n", " def h(value):\n", " return [int(f(value)) for f in funs]\n", " self.__hashes = h\n", " \n", " def size(self):\n", " return self.__size\n", " \n", " def insert(self, value):\n", " \"\"\" Inserts a value into this set \"\"\"\n", " for h in self.__hashes(value):\n", " self.__buckets[h % self.__size] = True\n", " \n", " def lookup(self, value):\n", " \"\"\" Returns true if value may be in this set\n", " (i.e., may return false positives) \"\"\"\n", " for h in self.__hashes(value):\n", " if self.__buckets[h % self.__size] == False:\n", " return False\n", " return True\n", " \n", " def merge_from(self, other):\n", " \"\"\" Merges other in to this filter by \n", " taking the bitwise OR of this and \n", " other. Updates this filter in place. \"\"\"\n", " self.__buckets.merge_from(other.__buckets)\n", " \n", " def intersect(self, other):\n", " \"\"\" Takes the approximate intersection of \n", " this and other, returning a new filter \n", " approximating the membership of the \n", " intersection of the set approximated \n", " by self and the set approximated by other.\n", " \n", " The upper bound on the false positive rate \n", " of the resulting filter is the greater of \n", " the false positive rates of self and other \n", " (but the FPR may be worse than the FPR of \n", " a Bloom filter constructed only from the \n", " values in the intersection of the sets \n", " approximated by self and other). \"\"\"\n", " \n", " b = Bloom(self.size(), self.__hashes)\n", " b.__buckets.merge_from(self.__buckets)\n", " b.__buckets.intersect_from(other.__buckets)\n", " return b\n", " \n", " def union(self, other):\n", " \"\"\" Generates a Bloom filter approximating the \n", " membership of the union of the set approximated\n", " by self and the set approximated by other.\n", " \n", " Unlike intersect, this does not affect the \n", " precision of the filter (i.e., its precision\n", " will be identical to that of a Bloom filter \n", " built up from the union of the two sets). 
\"\"\"\n", " \n", " b = Bloom(self.size(), self.__hashes)\n", " b.__buckets.merge_from(self.__buckets)\n", " b.__buckets.merge_from(other.__buckets)\n", " return b\n", " \n", " \n", " def dup(self):\n", " b = Bloom(self.size(), self.__hashes)\n", " b.merge_from(self)\n", " return b" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see these in action:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "b1 = Bloom(1024, hashes_for(3, 8))\n", "b2 = Bloom(1024, hashes_for(3, 8))\n", "\n", "b1.insert(\"foo\")\n", "b1.insert(\"bar\")\n", "b2.insert(\"foo\")\n", "b2.insert(\"blah\")\n", "\n", "b_intersect = b1.intersect(b2)\n", "b_intersect.lookup(\"foo\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "b_intersect.lookup(\"blah\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "b_union = b1.union(b2) \n", "b_union.lookup(\"blah\"), b_union.lookup(\"bar\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Partitioned Bloom Filters\n", "\n", "The _partitioned Bloom filter_ simply divides the set of buckets into several partitions (one for each hash function) so that, e.g., a bit in partition 0 can only be set by hash 0, and so on. A major advantage of the partitioned Bloom filter is that it has a better false positive rate under intersection (see the reference to Jeffrey and Steffan below), which can be better used to identify potential conflicts between very large sets.\n", "\n", "Because we track the count of hash functions explicitly (in the count of partitions), we can also easily adapt the cardinality estimation technique of [Swamidass and Baldi](http://www.igb.uci.edu/~pfbaldi/publications/journals/2007/ci600526a.pdf)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class PartitionedBloom(object):\n", " def __init__(self, size, hashes):\n", " \"\"\" Initializes a Bloom filter with the\n", " given per-partition size and a collection \n", " of hashes, which are functions taking \n", " arbitrary values and returning integers. \n", " The partition count is the number of hashes.\n", " \n", " hashes can be either a function taking \n", " a value and returning a list of results\n", " or a list of functions. 
In the latter \n", " case, this constructor will synthesize \n", " the former \"\"\"\n", " if hasattr(hashes, '__call__'):\n", " self.__hashes = hashes\n", " # inspect the tuple returned by the hash function to get a depth\n", " self.__depth = len(hashes(bytes()))\n", " else:\n", " funs = hashes[:]\n", " self.__depth = len(hashes)\n", " def h(value):\n", " return [int(f(value)) for f in funs]\n", " self.__hashes = h\n", " \n", " self.__buckets = BitVector(size * self.__depth)\n", " self.__size = size\n", "\n", " \n", " def size(self):\n", " return self.__size\n", " \n", " def partitions(self):\n", " return self.__depth\n", " \n", " def insert(self, value):\n", " \"\"\" Inserts a value into this set \"\"\"\n", " for (p, row) in enumerate(self.__hashes(value)):\n", " self.__buckets[(p * self.__size) + (row % self.__size)] = True\n", " \n", " def lookup(self, value):\n", " \"\"\" Returns true if value may be in this set\n", " (i.e., may return false positives) \"\"\"\n", " for (p, row) in enumerate(self.__hashes(value)):\n", " if not self.__buckets[(p * self.__size) + (row % self.__size)]:\n", " return False\n", " return True\n", " \n", " def merge_from(self, other):\n", " \"\"\" Merges other in to this filter by \n", " taking the bitwise OR of this and \n", " other. Updates this filter in place. \"\"\"\n", " self.__buckets.merge_from(other.__buckets)\n", " \n", " def intersect(self, other):\n", " \"\"\" Takes the approximate intersection of \n", " this and other, returning a new filter \n", " approximating the membership of the \n", " intersection of the set approximated \n", " by self and the set approximated by other.\n", " \n", " The upper bound on the false positive rate \n", " of the resulting filter is the greater of \n", " the false positive rates of self and other \n", " (but the FPR may be worse than the FPR of \n", " a Bloom filter constructed only from the \n", " values in the intersection of the sets \n", " approximated by self and other). \"\"\"\n", " \n", " b = PartitionedBloom(self.size(), self.__hashes)\n", " b.__buckets.merge_from(self.__buckets)\n", " b.__buckets.intersect_from(other.__buckets)\n", " return b\n", " \n", " def union(self, other):\n", " \"\"\" Generates a Bloom filter approximating the \n", " membership of the union of the set approximated\n", " by self and the set approximated by other.\n", " \n", " Unlike intersect, this does not affect the \n", " precision of the filter (i.e., its precision\n", " will be identical to that of a Bloom filter \n", " built up from the union of the two sets). \"\"\"\n", " \n", " b = PartitionedBloom(self.size(), self.__hashes)\n", " b.__buckets.merge_from(self.__buckets)\n", " b.__buckets.merge_from(other.__buckets)\n", " return b\n", " \n", " \n", " def dup(self):\n", " b = PartitionedBloom(self.size(), self.__hashes)\n", " b.merge_from(self)\n", " return b\n", " \n", " def approx_cardinality(self):\n", " \"\"\" Returns an estimate of the cardinality of\n", " the set modeled by this filter. Uses\n", " a technique due to Swamidass and Baldi. 
\"\"\"\n", "        from math import log\n", "        m, k = self.size() * self.partitions(), self.partitions()\n", "        X = self.__buckets.count_set_bits()\n", "        return -(m / k) * log(1 - (X / m))\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def pbloom_experiment(sample_count, size, hashes, mod1=3, mod2=7, seed=0x15300625):\n", "    import random\n", "    \n", "    random.seed(seed)\n", "    pb1 = PartitionedBloom(size, hashes)\n", "    pb2 = PartitionedBloom(size, hashes)\n", "    \n", "    b1 = Bloom(pb1.size() * pb1.partitions(), hashes)\n", "    b2 = Bloom(pb1.size() * pb1.partitions(), hashes)\n", "    result = []\n", "    pb_fp, b_fp = 0, 0\n", "    \n", "    count = 0\n", "    \n", "    for i in range(sample_count):\n", "        bits = random.getrandbits(64)\n", "        if i % mod1 == 0:\n", "            pb1.insert(bits)\n", "            b1.insert(bits)\n", "            count += 1\n", "        if i % mod2 == 0:\n", "            pb2.insert(bits)\n", "            b2.insert(bits)\n", "    \n", "    pb = pb1.intersect(pb2)\n", "    b = b1.intersect(b2)\n", "    \n", "    random.seed(seed)\n", "    \n", "    for i in range(sample_count):\n", "        bits = random.getrandbits(64)\n", "        if pb.lookup(bits) and ((i % mod1 != 0) or (i % mod2 != 0)):\n", "            pb_fp += 1\n", "        if b.lookup(bits) and ((i % mod1 != 0) or (i % mod2 != 0)):\n", "            b_fp += 1\n", "    return (count, b_fp, pb_fp)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results = []\n", "\n", "for pwr in range(10, 17):\n", "    for count in [1 << pwr, (1 << pwr) + (1 << (pwr - 1))]:\n", "        tp, bfp, pbfp = pbloom_experiment(count, 16384, hashes_for(8, 4))\n", "        results.append((\"Bloom\", count, bfp / (float(tp) + bfp)))\n", "        results.append((\"partitioned Bloom\", count, pbfp / (float(tp) + pbfp)))\n", "    \n", "df = DataFrame.from_records(results)\n", "df.rename(columns={0: \"kind\", 1: \"unique values\", 2: \"FPR\"}, inplace=True)\n", "\n", "ax = sns.pointplot(\"unique values\", \"FPR\", hue=\"kind\", ci=None, data=df, scatter=True)\n", "_ = ax.set(ylabel=\"FPR\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Applications\n", "\n", "* The application Bloom used as a case study in [his paper introducing the structure](http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.20.2080) was a hyphenation program, in which roughly 90% of words could be hyphenated by simple rules but 10% required a dictionary lookup -- and the dictionary was too large to hold in core. By using a small Bloom filter to record the words that required dictionary lookup, it would be possible to dramatically reduce disk accesses without impacting the correctness of the application.\n", "* [Bloom join](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.134.5196) is a classic technique to optimize joins in distributed databases. The result of a join is, logically, the subset of the Cartesian product of two relations that satisfies some predicate (typically an equality relation on a field). There are many strategies for implementing joins in conventional databases, but these may prove prohibitive in a distributed database, where different relations may reside on different machines. Bloom join optimizes these joins by allowing local filtering of the relations involved. 
For example, consider the SQL statement `SELECT * FROM A, B WHERE A.x = B.x`: by broadcasting Bloom filters of the sets of values for `x` in both `A` and `B`, it is possible to filter out many tuples that would never appear in the result of the join.\n", "* Bloom filters are often implemented in hardware, since a range of microarchitectural features can benefit from fast approximate set membership queries. For one example application, see [Jeffrey and Steffan](http://www.eecg.toronto.edu/~steffan/papers/jeffrey_spaa11.pdf), in which the motivating example involves using Bloom filters to show that two hardware transactions do not interfere before allowing them to commit. (This technique is not their innovation; rather, the focus of Jeffrey and Steffan's work is to show that _partitioned Bloom filters_ admit a smaller false positive rate for the intersection of Bloom filters, and thus for testing set disjointness.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Count-min sketch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The count-min sketch estimates how many times each value has appeared in a stream. It is a two-dimensional array of counters with one row per hash function: inserting a value increments one counter in each row (the one selected by that row's hash), and looking up a value takes the _minimum_ of its counters across rows. Hash collisions can only inflate counts, so the estimate is an upper bound on the true count." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class CMS(object):\n", "    def __init__(self, width, hashes):\n", "        \"\"\" Initializes a Count-min sketch with the\n", "            given width and a collection of hashes, \n", "            which are functions taking arbitrary \n", "            values and returning integers. The depth\n", "            of the sketch structure is taken from the\n", "            number of supplied hash functions.\n", "            \n", "            hashes can be either a function taking \n", "            a value and returning a list of results\n", "            or a list of functions. In the latter \n", "            case, this constructor will synthesize \n", "            the former. \"\"\"\n", "        self.__width = width\n", "        \n", "        if hasattr(hashes, '__call__'):\n", "            self.__hashes = hashes\n", "            # inspect the list returned by the hash function to get a depth\n", "            self.__depth = len(hashes(bytes()))\n", "        else:\n", "            funs = hashes[:]\n", "            self.__depth = len(hashes)\n", "            def h(value):\n", "                return [int(f(value)) for f in funs]\n", "            self.__hashes = h\n", "        \n", "        self.__buckets = numpy.zeros((int(width), int(self.__depth)), numpy.uint64)\n", "    \n", "    def width(self):\n", "        return self.__width\n", "    \n", "    def depth(self):\n", "        return self.__depth\n", "    \n", "    def insert(self, value):\n", "        \"\"\" Inserts a value into this sketch \"\"\"\n", "        for (row, col) in enumerate(self.__hashes(value)):\n", "            self.__buckets[col % self.__width][row] += 1\n", "    \n", "    def lookup(self, value):\n", "        \"\"\" Returns a biased estimate of the number of times value has been inserted into this sketch \"\"\"\n", "        return min([self.__buckets[col % self.__width][row] for (row, col) in enumerate(self.__hashes(value))])\n", "    \n", "    def merge_from(self, other):\n", "        \"\"\" Merges other into this sketch by \n", "            adding the counts from each bucket in other\n", "            to the corresponding buckets in this\n", "            \n", "            Updates this. \"\"\"\n", "        self.__buckets += other.__buckets\n", "    \n", "    def merge(self, other):\n", "        \"\"\" Creates a new sketch by merging this sketch's\n", "            counts with those of another sketch. 
\"\"\"\n", " \n", " cms = CMS(self.width(), self.__hashes)\n", " cms.__buckets += self.__buckets\n", " cms.__buckets += other.__buckets\n", " return cms\n", " \n", " def inner(self, other):\n", " \"\"\" returns the inner product of self and other, estimating \n", " the equijoin size between the streams modeled by \n", " self and other \"\"\"\n", " r, = numpy.tensordot(self.__buckets, other.__buckets).flat\n", " return r\n", " \n", " def minimum(self, other):\n", " \"\"\" Creates a new sketch by taking the elementwise minimum \n", " of this sketch and another. \"\"\"\n", " cms = CMS(self.width(), self.__hashes)\n", " cms.__buckets = numpy.minimum(self.__buckets, other.__buckets)\n", " return cms\n", "\n", " def dup(self):\n", " cms = CMS(self.width(), self.__hashes)\n", " cms.merge_from(self)\n", " return cms" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cms = CMS(16384, hashes_for(3,8))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cms.lookup(\"foo\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "cms.insert(\"foo\")\n", "cms.lookup(\"foo\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While hash collisions in Bloom filters lead to false positives, hash collisions in count-min sketches lead to overestimating counts. To see how much this will affect us in practice, we can design an empirical experiment to plot the cumulative distribution of the factors that we've overestimated counts by in sketches of various sizes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def cms_experiment(sample_count, size, hashes, seed=0x15300625):\n", " import random\n", " from collections import namedtuple\n", " \n", " random.seed(seed)\n", " cms = CMS(size, hashes)\n", " \n", " result = []\n", " total_count = 0\n", " \n", " # update the counts\n", " for i in range(sample_count):\n", " bits = random.getrandbits(64)\n", " if i % 100 == 0:\n", " # every hundredth entry is a heavy hitter\n", " insert_count = (bits % 512) + 1\n", " else:\n", " insert_count = (bits % 8) + 1\n", " \n", " for i in range(insert_count):\n", " cms.insert(bits)\n", " \n", " random.seed(seed)\n", " # look up the bit sequences again\n", " for i in range(sample_count):\n", " bits = random.getrandbits(64)\n", " if i % 100 == 0:\n", " # every hundredth entry is a heavy hitter\n", " expected_count = (bits % 512) + 1\n", " else:\n", " expected_count = (bits % 8) + 1\n", "\n", " result.append((int(cms.lookup(bits)), int(expected_count)))\n", " \n", " return result" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results = cms_experiment(1 << 14, 4096, hashes_for(3, 8))\n", "df = DataFrame.from_records(results)\n", "df.rename(columns={0: \"actual count\", 1: \"expected count\"}, inplace=True)\n", "sns.distplot(df[\"actual count\"] / df[\"expected count\"], hist_kws=dict(cumulative=True), kde_kws=dict(cumulative=True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, about 55% of our counts for this small sketch are overestimated by less than a factor of three, although the worst overestimates are quite large indeed. Let's try with a larger sketch structure." 
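 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick aside before we scale the sketch up: like the other structures in this notebook, count-min sketches built over different partitions of a stream can be combined. The cell below is a small illustration using the `CMS` class and `hashes_for` from above: we split a stream of values across two sketches, merge them, and check that the merged sketch reports the same counts as a single sketch that saw every value." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import random\n", "\n", "random.seed(0x15300625)\n", "whole, part1, part2 = [CMS(4096, hashes_for(3, 8)) for _ in range(3)]\n", "\n", "values = [random.getrandbits(64) for _ in range(10000)]\n", "for i, v in enumerate(values):\n", "    whole.insert(v)\n", "    # send alternating values to the two partial sketches\n", "    (part1 if i % 2 == 0 else part2).insert(v)\n", "\n", "merged = part1.merge(part2)\n", "# every value should get the same estimate from the merged sketch\n", "all(merged.lookup(v) == whole.lookup(v) for v in values[:100])"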
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results = cms_experiment(1 << 14, 8192, hashes_for(3, 8))\n", "df = DataFrame.from_records(results)\n", "df.rename(columns={0: \"actual count\", 1: \"expected count\"}, inplace=True)\n", "\n", "sns.distplot(df[\"actual count\"] / df[\"expected count\"], hist_kws=dict(cumulative=True), kde_kws=dict(cumulative=True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With a larger filter size (columns) *and* more hash functions (rows), we can dramatically reduce the bias." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results = cms_experiment(1 << 14, 8192, hashes_for(8, 5))\n", "df = DataFrame.from_records(results)\n", "df.rename(columns={0: \"actual count\", 1: \"expected count\"}, inplace=True)\n", "\n", "sns.distplot(df[\"actual count\"] / df[\"expected count\"], hist_kws=dict(cumulative=True), kde_kws=dict(cumulative=True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises\n", "\n", "Here are some exercises to try out if you're interested in extending the count-min sketch:\n", "\n", "* The count-min sketch is a biased estimator. Implement a technique to adjust the estimates for expected bias.\n", "* When paired with an auxiliary structure like a priority queue, the count-min sketch can be used to track the top-_k_ event types in a stream. Try implementing a couple of approaches!\n", "* Consider how you'd handle negative inserts. How would you need to change the query code? What else might change?\n", "* The implementation includes a `minimum` method. What might it be useful for? What limitations might it have?\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "# HyperLogLog\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "HyperLogLog is the trickiest of these three techniques, so let's start with some intuitions.\n", "\n", "If we have a source from which we can sample uniformly-distributed _n_-bit integers, we can also see it as a source for drawing _n_ coin flips -- each bit in an integer sampled from the population of uniformly-distributed _n_-bit integers is independent of the others and is equally likely to be true or false.\n", "\n", "Because each bit is independent and equally likely to be true or false, runs of consecutive bits with the same value become increasingly unlikely with length. The probability of seeing _n_ consecutive zeros, for example, is $1$ in $2^n$. Similarly, if the largest number of leading zeros we've seen in a stream of random numbers is _n_, we can estimate that we've seen $2^n$ numbers.\n", "\n", "To see this in action, let's sample some random numbers and plot the distribution of leading-zero counts. We'll start with a function to count leading zeros:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def leading_zeros(bs):\n", " \"\"\" Return the index of the leftmost one in an \n", " integer represented as an array of bytes \"\"\"\n", " first = 0\n", " for b in bs:\n", " if b == 0:\n", " first += 8\n", " else:\n", " for bit in range(7, -1, -1):\n", " if ((1 << bit) & b) > 0:\n", " return first\n", " else:\n", " first += 1\n", " return first" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll then generate some 32-bit random integers and plot the distribution of leading-zero counts." 
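 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, though, a couple of hand-picked spot checks to confirm that `leading_zeros` behaves as described: a byte string starting with a one bit has no leading zeros, `0x00000100` has 23, and an all-zero input reports 32." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# expect (0, 23, 32)\n", "(leading_zeros(bytes([0x80, 0, 0, 0])),\n", " leading_zeros(bytes([0, 0, 1, 0])),\n", " leading_zeros(bytes([0, 0, 0, 0])))"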
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def lz_experiment(ct):\n", " from numpy.random import randint as ri\n", " result = []\n", " for _ in range(ct):\n", " result.append(leading_zeros(bytes([ri(255), ri(255), ri(255), ri(255)])))\n", "\n", " return result\n", "\n", "lz = lz_experiment(4096)\n", "\n", "sns.distplot(lz, hist_kws=dict(cumulative=True), kde_kws=dict(cumulative=True))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As we can see from inspecting the cumulative distribution plot, about 50% of the samples have no leading zeros, about 75% have one or fewer leading zeros, about 87.5% of samples have two or fewer leading zeros, and so on." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from hashlib import sha1\n", "import pickle\n", "\n", "def h64(v):\n", " bvalue = type(v) == bytes and v or pickle.dumps(v)\n", " return int.from_bytes(sha1(bvalue).digest()[:8], 'little')\n", "\n", "def get_alpha(p):\n", " return {\n", " 4: 0.673,\n", " 5: 0.697,\n", " 6: 0.709,\n", " }.get(p, 0.7213 / (1.0 + 1.079 / (1 << p)))\n", "\n", "def first_set_bit(i, isize):\n", " return isize - i.bit_length() + 1\n", "\n", "class HLL(object):\n", " import numpy as np\n", " def __init__(self, p=4):\n", " self.p = min(max(p, 4), 12)\n", " self.m = int(2 ** self.p)\n", " self.alpha = get_alpha(self.p)\n", " self._registers = np.zeros(self.m, np.uint8)\n", " self._zeros = self.m\n", " \n", " def add(self, v):\n", " h = h64(v)\n", " idx = h & (self.m - 1)\n", " h >>= self.p\n", " fsb = first_set_bit(h, 64 - self.p)\n", " if self._zeros > 0 and self._registers[idx] == 0 and fsb > 0:\n", " self._zeros -= 1\n", " self._registers[idx] = max(self._registers[idx], fsb)\n", " \n", " def approx_count(self):\n", " from math import log\n", " from scipy.stats import hmean\n", " \n", " if self._zeros > 0:\n", " # if we have empty registers (and thus probably a small set),\n", " # use a different approximation that will be more precise\n", " return self.m * math.log(float(self.m) / self._zeros)\n", " else:\n", " # return the harmonic mean of 2 to the power of every register, \n", " # scaled by the number of registers\n", " return self.alpha * self.m * hmean(np.power(2.0, self._registers))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "hll = HLL()\n", "\n", "import random\n", "\n", "for i in range(20000):\n", " hll.add(random.getrandbits(64).to_bytes(8, \"big\"))\n", "\n", "hll.approx_count()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Like Bloom filters and count-min sketches, HyperLogLog estimates can also be added together so that you can summarize large data sets in parallel. To combine two HyperLogLog estimates with the same number of registers, simply take the maximum of each pair of registers with the same index. (As an easy exercise, implement this above and convince yourself that it works the same as using a single estimate for a large stream.)\n", "\n", "If you're interested in learning more about HyperLogLog, a great place to start is [\"HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm\"](https://research.google.com/pubs/pub40671.html). As an exercise, try implementing some of their techniques to improve the performance of the code above!" 
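 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before moving on, here is one possible version of the register-wise merge described above, in case you want something to check your own implementation against. The standalone helper `hll_merge` below is illustrative (it assumes both estimators were built with the same precision `p`): it takes the maximum of each pair of registers and recounts the empty ones. Because each register already holds the largest leading-zero count it has seen, an estimator merged from two halves of a stream ends up with exactly the same registers as one that saw every element." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import random\n", "\n", "def hll_merge(a, b):\n", "    # illustrative helper: combine two estimators with the same precision\n", "    # by taking the register-wise maximum\n", "    assert a.p == b.p\n", "    merged = HLL(a.p)\n", "    merged._registers = np.maximum(a._registers, b._registers)\n", "    merged._zeros = int(np.count_nonzero(merged._registers == 0))\n", "    return merged\n", "\n", "hll_all, hll_evens, hll_odds = HLL(), HLL(), HLL()\n", "\n", "for i in range(20000):\n", "    v = random.getrandbits(64).to_bytes(8, 'big')\n", "    hll_all.add(v)\n", "    (hll_evens if i % 2 == 0 else hll_odds).add(v)\n", "\n", "hll_merge(hll_evens, hll_odds).approx_count(), hll_all.approx_count()"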
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Minhash\n", "\n", "To calculate the similarity of two sets, we can use the [Jaccard index](https://en.wikipedia.org/wiki/Jaccard_index), which divides the size of the sets' intersection by the size of their union. As with the other problems we've discussed so far, keeping explicit representations of sets around is intractable for very large sets, but it is also intractable if we have very many sets, for example, if we're building a search engine. We would like a way to construct _signatures_ of sets in such a way that we can calculate their approximate similarity.\n", "\n", "Minhash is a technique for constructing signatures of sets that will allow us to estimate their approximate similarity. Here's the basic technique, which tracks document signatures by keeping track of the _minimum_ value seen for multiple hash functions across every element in the set." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.utils.murmurhash import murmurhash3_bytes_u32 as mhash\n", "\n", "def murmurmaker(seed):\n", " \"\"\" \n", " return a function to calculate a 32-bit murmurhash of v \n", " (an object or bytes), using the given seed\n", " \"\"\"\n", " def m(v):\n", " bvalue = type(v) == bytes and v or pickle.dumps(v)\n", " return mhash(bvalue, seed=seed)\n", " \n", " return m\n", "\n", "class SimpleMinhash(object):\n", " \"\"\" This is a very basic implementation of minhash \"\"\"\n", " def __init__(self, hashes):\n", " rng = numpy.random.RandomState(seed=int.from_bytes(b\"rad!\", \"big\"))\n", " self.buckets = numpy.full(hashes, (1 << 32) - 1)\n", " self.hashes = [murmurmaker(seed) for seed in rng.randint(0, (1<<32) - 1, hashes)]\n", " \n", " def add(self, obj):\n", " self.buckets = numpy.minimum(self.buckets, [h(obj) for h in self.hashes])\n", " \n", " def similarity(self, other):\n", " \"\"\" \"\"\"\n", " return numpy.count_nonzero(self.buckets==other.buckets) / float(len(self.buckets))\n", " \n", " def merge(self, other):\n", " \"\"\" returns a newly-allocated minhash structure containing \n", " the merge of this hash and another \"\"\"\n", " result = SimpleMinhash(0)\n", " result.buckets = numpy.minimum(self.buckets, other.buckets)\n", " result.hashes = self.hashes\n", " return result" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can test a small Minhash with random values to see how well the approximate Jaccard index implementation works." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def test_minhash(count=50000, expected_percentage=.20):\n", " m1 = SimpleMinhash(1024)\n", " m2 = SimpleMinhash(1024)\n", " for i in range(count):\n", " bits = random.getrandbits(64).to_bytes(8, \"big\")\n", " if i % 1000 < (1000 * expected_percentage):\n", " m1.add(bits)\n", " m2.add(bits)\n", " elif i % 2 == 0:\n", " m1.add(bits)\n", " else:\n", " m2.add(bits)\n", " return m1.similarity(m2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "test_minhash()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A very common application for these kinds of document signatures is identifying similar documents based on the words that they contain -- this is useful, e.g., for detecting plagiarized prose or grouping similar web pages or news articles together. 
Unfortunately, even having an efficient way to calculate pairwise similarities is insufficient for this application: it doesn't matter how cheap it is to do a pairwise comparison if we have to compare every pair in a large document collection! We can use _locality-sensitive hashing_ to quickly identify similar documents without explicit pairwise comparisons. The basic idea is that we'll return a set of keys, each corresponding to the hash of a subset of the signature." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class LSHMinhash(object):\n", " \"\"\" This is a very basic implementation of minhash with locality-sensitive hashing \"\"\"\n", " def __init__(self, rows, bands):\n", " rng = numpy.random.RandomState(seed=int.from_bytes(b\"rad!\", \"big\"))\n", " hashes = rows * bands\n", " self.rows = rows\n", " self.bands = bands\n", " self.buckets = numpy.full(hashes, (1 << 32) - 1)\n", " self.hashes = [murmurmaker(seed) for seed in rng.randint(0, (1<<32) - 1, hashes)]\n", " \n", " def add(self, obj):\n", " self.buckets = numpy.minimum(self.buckets, [h(obj) for h in self.hashes])\n", " \n", " def similarity(self, other):\n", " \"\"\" \"\"\"\n", " return numpy.count_nonzero(self.buckets==other.buckets) / float(len(self.buckets))\n", " \n", " def merge(self, other):\n", " \"\"\" returns a newly-allocated minhash structure containing \n", " the merge of this hash and another \"\"\"\n", " result = SimpleMinhash(0)\n", " result.buckets = numpy.minimum(self.buckets, other.buckets)\n", " result.hashes = self.hashes\n", " return result\n", " \n", " def lsh_keys(self):\n", " return [self.hashes[0]([b for b in band]) for band in self.buckets.copy().reshape((self.rows, self.bands))]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def test_lsh_minhash(count=50000, expected_percentage=.20):\n", " m1 = LSHMinhash(64, 16)\n", " m2 = LSHMinhash(64, 16)\n", " for i in range(count):\n", " bits = random.getrandbits(64).to_bytes(8, \"big\")\n", " if i % 1000 < (1000 * expected_percentage):\n", " m1.add(bits)\n", " m2.add(bits)\n", " elif i % 2 == 0:\n", " m1.add(bits)\n", " else:\n", " m2.add(bits)\n", " return (m1.similarity(m2), m1.lsh_keys(), m2.lsh_keys())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tup = test_lsh_minhash(expected_percentage=.95)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can then group cells by keys (or even by parts of their keys) to identify candidate matches, which lets us only check a subset of all potential matches for similarity:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for t in zip(tup[1], tup[2]):\n", " if t[0] == t[1]:\n", " print(t)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To learn more about Minhash, locality-sensitive hashing, and similar techniques, see [Chapter 3](http://infolab.stanford.edu/~ullman/mmds/ch3.pdf) of [_Mining of Massive Datasets_](http://infolab.stanford.edu/~ullman/mmds/book.pdf) by Leskovec, Rajaraman, and Ullman." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.6", "language": "python", "name": "jupyter" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }