{ "metadata": { "name": "GithubUsers" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "##### Example: Number of Github users\n", "\n", "\n", "This is a fun example. Suppose we wish to predict how many sign-ups there are on Github.com. Officially, Github does not release an up-to-date count, and at last offical annoucment (January 2013) the count was 3 million. What if we wish to measure it today? We *could* [extrapolate future numbers from previous annoucements](http://redmonk.com/dberkholz/2013/01/21/github-will-hit-5-million-users-within-a-year/), but this uses little data and we could potentially be off by hundreds of thousands, and you are essentially just curve fitting complicated models. \n", "\n", "Instead, what we are going to use is `user id` numbers from real-time feeds on Github. The script `github_events.py` will pull the most recent 300 events from the [Github Public Timeline feed](https://github.com/timeline) (we'll be accessing data using their API). From this, we pull out the `user ids` associated with each event. We run the script below and display some output:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "%run github_events.py\n", "\n", "\n", "print \"Some User ids from the latest events (push, star, fork etc.) on Github.\"\n", "print ids[:10]\n", "print\n", "print \"Number of unique ids found: \", ids.shape[0]\n", "print \"Largest user id: \", ids.max()" ], "language": "python", "metadata": {}, "outputs": [ { "ename": "KeyError", "evalue": "'id'", "output_type": "pyerr", "traceback": [ "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m\n\u001b[1;31mKeyError\u001b[0m Traceback (most recent call last)", "\u001b[1;32mc:\\Python27\\lib\\site-packages\\IPython\\utils\\py3compat.pyc\u001b[0m in \u001b[0;36mexecfile\u001b[1;34m(fname, glob, loc)\u001b[0m\n\u001b[0;32m 169\u001b[0m \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 170\u001b[0m \u001b[0mfilename\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mfname\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 171\u001b[1;33m \u001b[1;32mexec\u001b[0m \u001b[0mcompile\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mscripttext\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mfilename\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;34m'exec'\u001b[0m\u001b[1;33m)\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mglob\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mloc\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 172\u001b[0m \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 173\u001b[0m \u001b[1;32mdef\u001b[0m \u001b[0mexecfile\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mfname\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m*\u001b[0m\u001b[0mwhere\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n", "\u001b[1;32mC:\\Users\\Cameron\\Dropbox\\My Work\\Probabilistic-Programming-and-Bayesian-Methods-for-Hackers\\Chapter2_MorePyMC\\github_events.py\u001b[0m in \u001b[0;36m\u001b[1;34m()\u001b[0m\n\u001b[0;32m 22\u001b[0m \u001b[0mdata\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mloads\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mr\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtext\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 23\u001b[0m \u001b[1;32mfor\u001b[0m \u001b[0mevent\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mdata\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 24\u001b[1;33m \u001b[0mids\u001b[0m\u001b[1;33m[\u001b[0m\u001b[0mk\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;33m(\u001b[0m \u001b[0mevent\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m\"actor\"\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;34m\"id\"\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 25\u001b[0m \u001b[0mk\u001b[0m\u001b[1;33m+=\u001b[0m\u001b[1;36m1\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 26\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n", "\u001b[1;31mKeyError\u001b[0m: 'id'" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Some User ids from the latest events (push, star, fork etc.) on Github.\n", "[1524995 1978503 1926860 1524995 3707208 374604 37715 770655 502701\n", " 4349707]" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "\n", "\n" ] }, { "output_type": "stream", "stream": "stdout", "text": [ "Number of unique ids found: 300\n", "Largest user id: " ] }, { "output_type": "stream", "stream": "stdout", "text": [ " 2085773151\n" ] } ], "prompt_number": 1 }, { "cell_type": "code", "collapsed": false, "input": [ "figsize(12.5,3)\n", "plt.hist( ids, bins = 45, alpha = 0.9)\n", "plt.title(\"Histogram of %d Github User ids\"%ids.shape[0] );\n", "plt.xlabel(\"User id\")\n", "plt.ylabel(\"Frequency\");" ], "language": "python", "metadata": {}, "outputs": [ { "ename": "NameError", "evalue": "name 'figsize' is not defined", "output_type": "pyerr", "traceback": [ "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m\n\u001b[1;31mNameError\u001b[0m Traceback (most recent call last)", "\u001b[1;32m\u001b[0m in \u001b[0;36m\u001b[1;34m()\u001b[0m\n\u001b[1;32m----> 1\u001b[1;33m \u001b[0mfigsize\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;36m12.5\u001b[0m\u001b[1;33m,\u001b[0m\u001b[1;36m3\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 2\u001b[0m \u001b[0mplt\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mhist\u001b[0m\u001b[1;33m(\u001b[0m \u001b[0mids\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mbins\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;36m45\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0malpha\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;36m0.9\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 3\u001b[0m \u001b[0mplt\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtitle\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"Histogram of %d Github User ids\"\u001b[0m\u001b[1;33m%\u001b[0m\u001b[0mids\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mshape\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;36m0\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m)\u001b[0m\u001b[1;33m;\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 4\u001b[0m \u001b[0mplt\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mxlabel\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"User id\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 5\u001b[0m \u001b[0mplt\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mylabel\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"Frequency\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m;\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n", "\u001b[1;31mNameError\u001b[0m: name 'figsize' is not defined" ] } ], "prompt_number": 2 }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are some users with multiple events, but we are only interested in unique `user ids`, hence why we have less than 300 ids. Above I printed the largest `user id`. Why is this important? If Github assigns `user ids` serially, which is a fair assumption, then we **know** that there are certainly more users than that number. Remember, we are only looking at less than 300 individuals out of a much larger population, so it is unlikely that we would have sampled the most recent sign-up. \n", "\n", "At best, we can only estimate the total number of sign-ups. Let's get more familar with this problem. Consider a fictional website that we wish to estimate the number of users:\n", "\n", "1. Suppose we sampled only two individuals in a similar manner to above: the ids are 3 and 10 respectively. Would it be likely that the website has millions of users? Not very. Alternatively, it is more likely the website has less than 100 users. \n", "\n", "2. On the other hand, if the ids were 3 and 34 989, we might be more willing to guess there could possibly thousands, or millions of user sign-ups. We are not very confident in an estimate, due to a lack of data.\n", "\n", "3. If we sample thousands of users, and the maximum `user id` is still 34 989, then is seems likely that the total number of sign ups is near 35 000. Hence our inference should be more confident.\n", "\n", "\n", "We make the following assumption:\n", "\n", "**Assumption:** Every user is equally likely to perform an event. Clearly, looking at the above histogram, this assumption is violated. The participation on Github is skewed towards early adopters, likely as these early-adopting individuals have a) more invested in Github, and b) saw the value earlier in Github, therefore are more interested in it. The distribution is also skewed towards new sign ups, who likely signed up just to push a project. \n", "\n", "To create a Bayesian model of this is easy. Based on the above assumption, all `user_ids` sampled are from a `DiscreteUniform` model, with lower bound 1 and an unknown upperbound. We don't have a strong belief about what the upper-bound might be, but we do know it will be larger than `ids.max()`. \n", "\n", "Working with such large numbers can cause numerical problem, hence we will scale everything by a million. Thus, instead of a `DiscreteUniform`, we will used a `Uniform`:" ] }, { "cell_type": "code", "collapsed": false, "input": [ "FACTOR = 1000000.\n", "\n", "import pymc as mc\n", "\n", "upper_bound = mc.Uniform( \"n_sign_ups\", ids.max()/FACTOR, (ids.max())/FACTOR + 1)\n", "obs = mc.Uniform(\"obs\", 0, upper_bound, value = ids/FACTOR, observed = True )\n", "\n", "#code to be examplained in Chp. 3.\n", "mcmc = mc.MCMC([upper_bound, obs] )\n", "mcmc.sample( 100000, 45000)" ], "language": "python", "metadata": {}, "outputs": [ { "ename": "ZeroProbability", "evalue": "Stochastic obs's value is outside its support,\n or it forbids its parents' current values.", "output_type": "pyerr", "traceback": [ "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m\n\u001b[1;31mZeroProbability\u001b[0m Traceback (most recent call last)", "\u001b[1;32m\u001b[0m in \u001b[0;36m\u001b[1;34m()\u001b[0m\n\u001b[0;32m 4\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 5\u001b[0m \u001b[0mupper_bound\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mmc\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mUniform\u001b[0m\u001b[1;33m(\u001b[0m \u001b[1;34m\"n_sign_ups\"\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mids\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmax\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m/\u001b[0m\u001b[0mFACTOR\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m(\u001b[0m\u001b[0mids\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmax\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m/\u001b[0m\u001b[0mFACTOR\u001b[0m \u001b[1;33m+\u001b[0m \u001b[1;36m1\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 6\u001b[1;33m \u001b[0mobs\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mmc\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mUniform\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"obs\"\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;36m0\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mupper_bound\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mvalue\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mids\u001b[0m\u001b[1;33m/\u001b[0m\u001b[0mFACTOR\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mobserved\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mTrue\u001b[0m \u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 7\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 8\u001b[0m \u001b[1;31m#code to be examplained in Chp. 3.\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n", "\u001b[1;32mc:\\Python27\\lib\\site-packages\\pymc\\distributions.pyc\u001b[0m in \u001b[0;36m__init__\u001b[1;34m(self, *args, **kwds)\u001b[0m\n\u001b[0;32m 268\u001b[0m \u001b[0mrandom\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mdebug_wrapper\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mrandom\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 269\u001b[0m \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 270\u001b[1;33m \u001b[0mStochastic\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m__init__\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mlogp\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mlogp\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mrandom\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mrandom\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mlogp_partial_gradients\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mlogp_partial_gradients\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mdtype\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mdtype\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0marg_dict_out\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 271\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 272\u001b[0m \u001b[0mnew_class\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m__name__\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mname\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n", "\u001b[1;32mc:\\Python27\\lib\\site-packages\\pymc\\PyMCObjects.pyc\u001b[0m in \u001b[0;36m__init__\u001b[1;34m(self, logp, doc, name, parents, random, trace, value, dtype, rseed, observed, cache_depth, plot, verbose, isdata, check_logp, logp_partial_gradients)\u001b[0m\n\u001b[0;32m 714\u001b[0m \u001b[1;32mif\u001b[0m \u001b[0mcheck_logp\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 715\u001b[0m \u001b[1;31m# Check initial value\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 716\u001b[1;33m \u001b[1;32mif\u001b[0m \u001b[1;32mnot\u001b[0m \u001b[0misinstance\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mlogp\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mfloat\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 717\u001b[0m \u001b[1;32mraise\u001b[0m \u001b[0mValueError\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"Stochastic \"\u001b[0m \u001b[1;33m+\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m__name__\u001b[0m \u001b[1;33m+\u001b[0m \u001b[1;34m\"'s initial log-probability is %s, should be a float.\"\u001b[0m \u001b[1;33m%\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mlogp\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m__repr__\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 718\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n", "\u001b[1;32mc:\\Python27\\lib\\site-packages\\pymc\\PyMCObjects.pyc\u001b[0m in \u001b[0;36mget_logp\u001b[1;34m(self)\u001b[0m\n\u001b[0;32m 846\u001b[0m \u001b[1;32mraise\u001b[0m \u001b[0mZeroProbability\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0merrmsg\u001b[0m \u001b[1;33m+\u001b[0m \u001b[1;34m\"\\nValue: %s\\nParents' values:%s\"\u001b[0m \u001b[1;33m%\u001b[0m \u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_value\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_parents\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mvalue\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 847\u001b[0m \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 848\u001b[1;33m \u001b[1;32mraise\u001b[0m \u001b[0mZeroProbability\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0merrmsg\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 849\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 850\u001b[0m \u001b[1;32mreturn\u001b[0m \u001b[0mlogp\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n", "\u001b[1;31mZeroProbability\u001b[0m: Stochastic obs's value is outside its support,\n or it forbids its parents' current values." ] } ], "prompt_number": 3 }, { "cell_type": "code", "collapsed": false, "input": [ "from scipy.stats.mstats import mquantiles\n", "\n", "samples = mcmc.trace(\"n_sign_ups\")[:]\n", "\n", "hist(samples, bins = 100, \n", " label = \"Uniform prior\",\n", " normed=True, alpha = 0.8, \n", " histtype=\"stepfilled\", color = \"#7A68A6\" );\n", "\n", "quantiles_mean = np.append( mquantiles( samples, [0.05, 0.5, 0.95]), samples.mean() )\n", "print \"Quantiles: \", quantiles_mean[:3]\n", "print \"Mean: \", quantiles_mean[-1]\n", "plt.vlines( quantiles_mean, 0, 33, \n", " linewidth=2, linestyles = [\"--\", \"--\", \"--\", \"-\"],\n", " )\n", "plt.title(\"Posterior distribution of total number of Github users\" )\n", "plt.xlabel(\"number of users (in millions)\")\n", "plt.legend()\n", "plt.xlim( ids.max()/FACTOR - 0.01, ids.max()/FACTOR + 0.12 );" ], "language": "python", "metadata": {}, "outputs": [ { "ename": "NameError", "evalue": "name 'mcmc' is not defined", "output_type": "pyerr", "traceback": [ "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m\n\u001b[1;31mNameError\u001b[0m Traceback (most recent call last)", "\u001b[1;32m\u001b[0m in \u001b[0;36m\u001b[1;34m()\u001b[0m\n\u001b[0;32m 1\u001b[0m \u001b[1;32mfrom\u001b[0m \u001b[0mscipy\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mstats\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmstats\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mmquantiles\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 2\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 3\u001b[1;33m \u001b[0msamples\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mmcmc\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mtrace\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"n_sign_ups\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 4\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m 5\u001b[0m hist(samples, bins = 100, \n", "\u001b[1;31mNameError\u001b[0m: name 'mcmc' is not defined" ] } ], "prompt_number": 4 }, { "cell_type": "markdown", "metadata": {}, "source": [ "Above we have plotted the posterior distribution. Note that there is no posterior probability assigned to the number of users being less than `ids.max()`. That is good, as it would be an impossible situation. \n", "\n", "The three dashed vertical bars, from left to right, are the 5%, 50% and 95% quantitle lines. That is, 5% of the probability is before the first line, 50% before the second and 95% before the third. The 50% quantitle is also know as the *median* and is a better measure of centrality than the mean for heavily skewed distributions like this one. The solid line is the posterior distribution's mean.\n", "\n", "So what can we say? Using the data above, there is a 95% chance that there are less than 4.4 million users, and is probably around 4.36 million users. I was wondering how accurate this figure was. At the time of this writing, it seems a bit high considering only five months prior the number was at 3 million:\n", "\n", "\n", "

Last night @github crossed the 3M user mark #turntup

— Rick Bradley (@rickbradley) January 15, 2013
\n", "\n", "\n", "\n", "I thought perhaps the `user_id` parameter was being used liberally to users/bots/changed names etc, so I contacted Github Support about it:\n", "\n", "

@cmrn_dp User IDs are assigned to new users/organizations, whether they\u2019re controlled by humans, groups, or bots.

— GitHub Support (@GitHubHelp) May 6, 2013
\n", "\n", "\n", "\n", "So we may be overestimating by including organizations, which perhaps should not be counted as users. TODO: estimate the number of organizations. Any takers?" ] }, { "cell_type": "code", "collapsed": false, "input": [], "language": "python", "metadata": {}, "outputs": [] } ], "metadata": {} } ] }