{ "cells": [ { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "# HyperLogLog in Practice\n", "\n", "- Mike Mull\n", "- @kwikstep\n", "- [https://github.com/mikemull/Notebooks/blob/master/hyperloglog.ipynb](https://github.com/mikemull/Notebooks/blob/master/hyperloglog.ipynb)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## What\n", "\n", "> \"The purpose of this note is to present and analyse an efficient algorithm for __estimating__ the number of distinct elements, known as the __cardinality__, of large data ensembles, which are referred to here as multisets and are usually __massive streams__ (read-once sequences).\"\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "The quote is from Flajolet's original HLL paper. I've highlighted the key words that describe what HLL does" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### An Easy Problem For Moderately Sized, Relatively Static Data\n", "\n", "`select count(distinct message) from server_log;`\n", "\n", "`cut -f2 server.log | sort | uniq | wc -l`\n", "\n", "- Complexity is more about space than computation\n", "- Space complexity of exact methods is generally O(n) or O(nlogn)\n", "- Estimation approaches focused on reducing memory usage while retaining accuracy" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "Determining the cardinality of sets of data is a very common operation, and fairly simple until the cardinalities get larger (in fact i've used this recently as an interview problem). Most methods are not _computationally_ expensive (basically O(n) since the scan the list); but are at best linear in the space used. The latter is the motivation behind most estimation algorithms.\n", "\n", "Note also that cardinality estimation is not easily done with some other more basic probabilistic methods. For example, _sampling_ doesn't work well because many unique items might occur few times." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Why\n", "\n", "- Query optimization\n", "- Monitoring of network traffic\n", "- Data sketching\n", "- Basic data analysis\n", " - From the paper: _On an average day, Powerdrill performs about 5 million such count distinct computations_\n", " - _...about 100 computations a day yeild a result great than 1 billion_\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "The earliest uses of cardinality estimation were probably for database query optimization. Given information about the size of various tables, the query planner can choose the best way to join them. These algorithms are often built into network equipment also to detect usage anomalies. Hyperloglog is also used for _data sketching_ along with things like CountMin and Bloom Filters (i first heard the term in relation to Trifacta's big-data management tools). Finally, the number of unique items is something that people are interested in know for numerous data analysis reasons, like monitoring usage or doing A/B testing." 
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## The History\n", "Probabilistic Counting -> LogLog -> SuperLogLog -> HyperLogLog -> HyperLogLog++ -> ?\n", "\n", "- Probabilistic Counting (1985)\n", "- LogLog/SuperLogLog (2003)\n", "- HyperLogLog (2007)\n", "- HyperLogLog++ (2013)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Probabilistic Counting\n", "\n", "- Determine an \"observable\" with some understood probability that will help us estimate the cardinality\n", " - Here, a _bit pattern_ of the hashed value of the thing we're counting\n", "- Calculate the observable for every item in the set\n", "- Infer the cardinality from the observed values" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Hashing\n", "\n", "$$\n", "hash(x) \\rightarrow [0 \\dots 2^{L}-1]\n", "$$\n", "\n", "- We assume (with some justification) that the hash function generates values _uniformly_, so, if L were 8:\n", "\n", "$$\n", "\\begin{equation}\n", " \\begin{aligned}\n", "P(00000001) &= 1/256 \\\\\n", "P(0000001x) &= 2/256 \\\\\n", "P(000001xx) &= 4/256 \\\\\n", " \\end{aligned}\n", "\\end{equation}\n", "$$\n", "\n", "- So, the more unique items we see, the more likely we are to see a bit pattern with more leading zeros" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### The Observable Value\n", "\n", "So our _observable_ is:\n", "\n", "For PC:\n", "$$\n", "\\rho(hash(x)) = \\text{position of the least significant 1 bit in the hash}\n", "$$\n", "\n", "For LogLog and HyperLogLog, this changes to:\n", "$$\n", "\\rho(hash(x)) = \\text{position of the leftmost 1 bit}\n", "$$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "The PC method uses a bitmap where bit i is set if the rho function has returned i for some item. Once all the items have been evaluated, the estimator used for log2(n) is the position of the lowest-index zero bit in the bitmap.\n", "\n", "In the case of loglog and hyperloglog the estimator is just the max() of rho." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Stochastic Averaging\n", "\n", "- Problem: The variance of the estimator in PC is too high\n", "- Solution 0: Run the process a bunch of times.\n", "- Solution 1: Use more hash functions\n", " - More computationally expensive\n", " - Hard to construct enough independent hash functions" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Stochastic Averaging\n", "\n", "- Problem: The variance of the estimator in PC is too high\n", "- Solution 2: Divide the stream into M substreams and average the per-substream estimates, each of which estimates n/M\n", " - Uses the first (or last) _p_ bits of the _hashed_ value to give 2^p substreams\n", "\n", "```\n", "[00000001|001001001010001001001010|\n", "[00000110|011010110010011010110010|\n", " |<-----useful bits------>|\n", "```\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### PCSA\n", "- m = 256 substreams gives only about 5% accuracy\n", "- PCSA is O(m log2 n) in space with a standard error of α / sqrt(m)" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "Probabilistic Counting with Stochastic Averaging (PCSA) didn't give great results, but the memory requirement was primarily dependent on the number of streams used."
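] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "To make the last few slides concrete, here is a minimal Python sketch (mine, not from any of the papers) of the substream split and the observable: the first _p_ bits of the hash pick the substream, and rho() — here the LogLog/HyperLogLog version, i.e. the position of the leftmost 1 bit — is computed on the remaining bits. The hash choice (SHA-1 truncated to 32 bits), the bit widths, and the names are illustrative assumptions, not the construction used in PCSA itself." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "# Illustrative sketch: split a hashed value into a p-bit substream index\n", "# and the remaining 'useful bits', which feed the observable rho().\n", "import hashlib\n", "\n", "L = 32          # hash length in bits\n", "p = 4           # 2**p = 16 substreams\n", "\n", "def hash32(item):\n", "    # SHA-1 truncated to 32 bits as a stand-in for a uniform hash\n", "    return int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:4], 'big')\n", "\n", "def rho(w, bits):\n", "    # Position of the leftmost 1 bit in a bits-wide word (1-based);\n", "    # returns bits + 1 if w is all zeros\n", "    return bits - w.bit_length() + 1\n", "\n", "def bucket_and_rho(item):\n", "    h = hash32(item)\n", "    j = h >> (L - p)                # first p bits pick the substream\n", "    w = h & ((1 << (L - p)) - 1)    # remaining bits are the useful bits\n", "    return j, rho(w, L - p)\n", "\n", "print(bucket_and_rho('user-42'))"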
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## A Word About Mysterious Constants\n", "\n", "$$\n", "E(R) \\approx \\log_2(\\phi n), \\quad \\phi \\approx 0.77351\n", "$$\n", "\n", "Or,\n", "\n", "$$\n", "\\text{standard error} = \\frac{0.78}{\\sqrt{m}}\n", "$$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "Flajolet's real racket was the analysis of algorithms, and if you read his papers they consist mostly of long and detailed proofs about the precise nature of the distribution of the estimators that are used in these algorithms. Sadly, those details are beyond the scope of this talk, but this constant, and the later normalizing factors that show up in loglog and hyperloglog, come from those analyses. Note that these factors come from equations that are dependent on _m_, so they apply to a specific number of streams." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## LogLog\n", "\n", "- Key difference is that now we store only the max value of rho for each stream\n", "- So, for 32-bit hashes we need at most 5 bits to track rho\n", "- In general, for hashes of length 2^k bits we need k bits to hold max(rho)\n", "- Like PCSA, uses the arithmetic mean of the per-substream values\n", "- Space complexity is O(log2 log2 n), hence the name\n", "\n", "$$\n", "E := \\alpha_m m 2^{\\frac{1}{m} \\sum_{j=1}^{m} M(j)}\n", "$$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "Many of the details of PCSA are retained in loglog. The major difference is that the complicated bitmap approach in PCSA has been replaced by an estimator that is simply the maximum value of rho for each stream. Since rho is a bit position, you only need k bits to track 2^k bit positions." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## HyperLogLog\n", "\n", "### The Harmonic Mean\n", "\n", "$$\n", "\\frac{m}{\\sum_{j=1}^{m} \\frac{1}{2^{M(j)}}}\n", "$$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "The major innovation of hyperloglog over loglog is the use of the _harmonic mean_ to calculate the average across the substreams. Because the dispersion of hyperloglog is lower (1.04/sqrt(m)), it can achieve the same accuracy as loglog with less memory (because it can use fewer substreams to get the same accuracy)." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## HyperLogLog\n", "\n", "$$\n", "E := \\frac{\\alpha_m m^2}{\\sum_{j=1}^{m} \\frac{1}{2^{M(j)}}}\n", "$$" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [ "$$\n", "\\alpha_m := \\frac{1}{m \\int_{0}^{\\infty} (\\log_2(\\frac{2 + u}{1 + u}))^m du} \n", "$$\n", "\n", "\n", "- HLL also makes adjustments for high and low cardinalities" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "The estimator for hyperloglog is relatively straightforward, except of course for the hairy-looking integral that's described as a normalizing factor."
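] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "To make the estimator concrete, here is a toy HyperLogLog in Python. It is a sketch of the raw estimator only: alpha_m is taken from the commonly used closed-form approximation 0.7213 / (1 + 1.079 / m) for m >= 128 rather than from the integral, the low- and high-range adjustments mentioned above are omitted, and the hash (SHA-1 truncated to 64 bits) and the names are my own illustrative choices." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [ "# Toy HyperLogLog: one register per substream, update with max(rho),\n", "# estimate with the harmonic mean. Range corrections are omitted.\n", "import hashlib\n", "\n", "p = 14\n", "m = 1 << p                            # 2**14 = 16384 registers\n", "M = [0] * m\n", "alpha_m = 0.7213 / (1 + 1.079 / m)    # approximation of the integral for m >= 128\n", "\n", "def hash64(item):\n", "    # SHA-1 truncated to 64 bits as a stand-in for a uniform hash\n", "    return int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], 'big')\n", "\n", "def rho(w, bits):\n", "    return bits - w.bit_length() + 1  # position of the leftmost 1 bit (1-based)\n", "\n", "def add(item):\n", "    h = hash64(item)\n", "    j = h >> (64 - p)                 # register index from the first p bits\n", "    w = h & ((1 << (64 - p)) - 1)     # remaining bits feed rho()\n", "    M[j] = max(M[j], rho(w, 64 - p))\n", "\n", "def estimate():\n", "    return alpha_m * m * m / sum(2.0 ** -x for x in M)\n", "\n", "for i in range(100000):\n", "    add('item-%d' % i)\n", "print(round(estimate()))              # close to 100000"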
" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "## Finally, HyperLogLog++\n", "\n", "- Accuracy\n", "- Memory Efficiency\n", "- Estimate Large Cardinalities\n", "- Practicality\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "Google's HLL++ doesn't introduce any radical innovations to the algorithm, but rather adds a lot of engineering to make it more accurate and memory efficient. These are the four goals they state as the objectives of their improvements" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Improvement 1: 64-bit Hash Function\n", "\n", "- Can handle much larger cardinalities\n", "- For size L hashes, requires log2(L + 1 -p) * 2^p bits\n", " - So the extra storage isn't that much\n" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "This one seems fairly obvious. Clearly they're going to be able to handle much larger cardinalities without worrying about collisions, and the extra memory required amounts to 1 bit per stream." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Improvement 2: Estimating Small Cardinalities\n", "#### Empirical Bias Correction\n", " - Calculate an average of raw estimates for each cardinality\n", " - When estimating cardinalities, use 6 closest interpolation points to correct bias\n", " \n", " ![alt text](hllbias.png \"Title\")" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "The HLL estimator has significant bias for lower cardinalities. Flajolet, et. al. addressed this by using a slightly different estimator if the estimate < 5/2m and there are empty buckets. In HLL++ they address this with an empirical bias correction. They calculate numerous estimates with the basic HLL approach, average those, and subtract from the true cardinality to get the bias. So now they have a bias estimate for a range of known cardinalities. However, when calculating a cardinality estimate for a new set of data, you obviously don't know the true cardinality, so to correct for bias they maintain a set of 200 interpolation points and do a nearest neighbor interpolation." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Improvement 2: Estimating Small Cardinalities\n", "#### Deciding Which Algorithm To Use\n", " ![Error in different algorithms](hllerror.png \"Title\")" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "Again, this is empirically determined. For cardinalities lower than about 11k (for precision 14), they use linear counting because it has lower standard error. For a middle range they use their bias-corrected version of HLL, and beyond that standard HLL." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### Improvement 3: Sparse Representation\n", " - If we use 6m bits for every case, we're wasting memory when n << m\n", " - They're using 2^14 = 16384 streams\n", " - Stream index and count encoded in an integer\n", " - Combination of sorted list and auxiliary set that gets merged.\n", " - If they don't need to convert to the non-sparse representation, they can use more streams (higher precision)\n", " - Convert to the dense representation if necessary." 
] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "So these Google guys take memory efficiency very seriously. The first part of this improvement is to use a sparse representation for the stream registers so they don't have to wastefully allocate 6m bits. They encode the stream index and the max(rho()) value into an integer, and keep these in a sorted list. To make insertions faster they also have an auxiliary set, which gets merged into the list when it reaches a certain size.\n", "\n", "One really clever aspect of this representation is that they can use more streams as long as they don't have to convert to the dense representation." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "#### Improvement 3a: Compressing and Encoding Sparse Representation\n", "\n", "- Compression:\n", " - use a variable number of bits to store (index, rho) based on the current estimate\n", " - use difference encoding since the sparse list is sorted\n", "- Encoding:\n", " - Don't store rho() at all in the sparse representation" ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "notes" } }, "source": [ "They make the observation that when the sparse representation is being used, the rho value won't be needed, because the algorithm will use the linear counting method in the end." ] }, { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [ "### The Final Results\n", " ![HLL++ Comparison](hllfig8.png \"Title\")" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "### References\n", "\n", "- [HyperLogLog in Practice](https://research.google.com/pubs/pub40671.html)\n", "- [Damn Cool Algorithms](http://blog.notdot.net/2012/09/Dam-Cool-Algorithms-Cardinality-Estimation)\n", "- [Counting Uniques Faster in BigQuery](https://cloud.google.com/blog/big-data/2017/07/counting-uniques-faster-in-bigquery-with-hyperloglog)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "celltoolbar": "Slideshow", "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 0 }