{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Simple Processing\n", "\n", "## Goals:\n", "\n", " - Handle gathered data with dict() and zip()\n", " - Find data relation with scipy\n", " - Get essential information like standard deviation $\\sigma$ and distributions $\\delta$\n", " - Linear correlation: what's that, when can help\n", " - Plotting\n", "\n", "## modules:\n", " - numpy, scipy, scipy.stats.stats, collections, random, time\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# numpy, scipy, scipy.stats.stats, collections, random, time\n", "from collections import Counter, defaultdict\n", "from itertools import chain, combinations\n", "from time import localtime\n", "import json\n", "\n", "from scipy import std, mean\n", "from scipy.stats.stats import pearsonr\n", "from matplotlib import pyplot as plt\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# This lesson requires the following modules\n", "from __future__ import unicode_literals, print_function, division\n", "\n", "# And %pylab features\n", "%pylab\n", "\n", "# Show plots into browser\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Chicken Paradox\n", "\n", "*\"According to latest statistics, \n", "it appears that you eat one chicken per year: \n", "and, if that doesn't fit your budget, \n", "you'll fit into statistic anyway, \n", "because someone will eat two.\"*\n", "\n", " C. A. Salustri (1871 – 1950)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# How to dismantle the Chicken Paradox (Exercise)\n", "\n", " - Gather data!\n", " - Write the following function using our parsing strategy and \n", " - Gather 10 seconds of ping output\n", " - Hints: reuse sh function \n", " - Hints: slice and filter lists using comprehension\n", "\n", "\n", "\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#\n", "# Exercise: use this frame for the exercise\n", "#\n", "def ping_rtt(seconds=10):\n", " \"\"\"@return: a list of ping RTT\"\"\"\n", " from course import sh\n", " # get sample output\n", " # find a solution in ipython\n", " # test and paste the code\n", " raise NotImplementedError\n", "\n", "!ping -c3 www.google.it || ping -n3 www.google.it" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# %load course/ping_rtt.py\n", "__author__ = 'rpolli'\n", "\n", "\n", "def ping_rtt():\n", " \"\"\"\n", " goal: slicing data\n", " goal: using zip to transpose data\n", " \"\"\"\n", " from course import sh\n", " import sys\n", " cmd = \"ping -c10 www.google.it\"\n", " if 'win' in sys.platform:\n", " cmd = \"ping -n10 www.google.it\"\n", "\n", " ping_output = sh(cmd)\n", " filter_lines = (x.split() for x in ping_output if 'time' in x)\n", " if 'win' in sys.platform:\n", " ping_output = [x[6::2] for x in filter_lines]\n", " else:\n", " ping_output = [x[-4:-1:2] for x in filter_lines]\n", " ttl, rtt = zip(*ping_output)\n", " return [float(x.split(\"=\")[1]) for x in rtt]\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "rtt = ping_rtt()\n", "print(*rtt)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Distributions: set, defaultdict\n", "\n", "A distribution or $\\delta$ shows the frequency of events, like how many people ate $x$ chickens;\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# We can generate a distribution using\n", "from collections import Counter\n", "distro = Counter(rtt)\n", "print(\"Un-bucketed distribution:\", distro)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# If we split data in 1-second buckets....\n", "distro = Counter(int(i) for i in rtt) \n", "print(\"One second bucket distribution:\",distro)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Exercise: which is the behavior of defaultdict?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# defaultdict can be used to implement more complex behaviors.\n", "from collections import defaultdict\n", "distro = defaultdict(int)\n", "\n", "# This works even if rtt is a generator\n", "for x in rtt:\n", " distro[x] += 1\n", "print(distro)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# or list.count if rtt is in-memory\n", "distro2 = {x: rtt.count(x) for x in set(rtt)}\n", "print(distro2)\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Exercise: use this cell to benchmark those three approaches while increasing the size of rtt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Standard Deviation: scipy\n", "\n", " - Standard deviation or $\\sigma$ formula is\n", " \n", " $\\sigma^{2}(X) := \\frac{ \\sum(x-\\bar{x})^{2} }{n} $ \n", " \n", " - $\\sigma$ tells if $\\delta$ is fair or not, and how much the mean ($\\bar{x}$) is representative\n", " \n", " - matplotlib.mlab.normpdf is a smooth function approximating the histogram\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from scipy import std, mean\n", "fair = [1, 1] # chickens\n", "unfair = [0, 2] # chickens\n", "\n", "# Same mean!\n", "assert mean(fair) == mean(unfair)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Use standard deviation!\n", "print(\"is unfair?\", std(fair)) # 0\n", "print(\"is unfair?\", std(unfair)) # 1" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#Check your computed values vs the $\\sigma$ returned by ping\n", "\"\"\"\n", " goal: remember to convert to numeric / float\n", " goal: use scipy\n", " goal: check stdev\n", "\"\"\"\n", "from scipy import std, mean # max, min are builtin\n", "rtt = ping_rtt()\n", "\n", "fmt_s = 'stdev: {:0.2}, mean: {}, min: {}, max: {}'\n", "rtt_std, rtt_mean = std(rtt), mean(rtt)\n", "rtt_max, rtt_min = max(rtt), min(rtt)\n", "print(fmt_s.format(rtt_std, rtt_mean, rtt_max, rtt_min))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Time Distributions: Exercise\n", "\n", " - parse the provided data/maillog file in ipython using its !magic\n", " - get an hourly email $\\delta$\n", " - expected output:\n", " \n", "```\n", "time_d = { # mail delivered (eg. removed) between\n", " 0: xxx # 00:00 - 00:59\n", " 1: xxx # 01:00 - 01:59\n", " ...\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# The data/maillog file contains\n", "ret = !cat ../data/maillog\n", "print(*ret[:3], sep='\\n')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Use the following cell to solve the exercise.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#\n", "# Time distribution exercise frame\n", "#\n", "solution = \"cmV0ID0gIWdyZXAgcmVtb3ZlZCBkYXRhL21haWxsb2cKaG91cnMgPSBbeFs6Ml0gZm9yIHggaW4g\\ncmV0LmZpZWxkcygyKV0K\"\n", "from course import show_solution\n", "show_solution(solution)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ret = !grep removed ../data/maillog\n", "#print(ret.fields(2))\n", "hours = [int(x[:2]) for x in ret.fields(2)]\n", "print(hours[:10], \"..\")\n", "time_d = Counter(hours)\n", "print(\"The distribution is\",time_d.viewitems())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Exercise: find the 3 and 4 most common values of time_d" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plotting distributions\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# To plot data..\n", "from matplotlib import pyplot as plt\n", "# and set the interactive mode\n", "plt.ion()\n", "\n", "# Plotting an histogram...\n", "frequency, slots, _ = hist(hours)\n", "plt.title(\"Hourly distribution\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# .. returns a\n", "from collections import OrderedDict\n", "distribution = OrderedDict(zip(slots,\n", " frequency))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Print it nicely with\n", "import json\n", "print(json.dumps(distribution, indent=1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Size distribution: Exercise\n", "\n", " - Create a size $\\delta$ from the previous maillog using hist(..., bins=...)\n", " - Hint: help(hist)\n", " \n", "```\n", " size_d = { # mail size between\n", " 0: xxx # 0 - 10k\n", " 1: xxx # 10k - 20k\n", " ..\n", " }\n", "```\n", "\n", " - Homework: Use the size $\\delta$ to find size\\_mean and size\\_sigma and compare with $\\sigma$ and mean evaluated from the original data-series\n", " \n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "#\n", "# Exercise frame\n", "#" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Simulating data with $\\sigma$ and $\\bar{x}$\n", "\n", " - Mean and a stdev are useful starting point to simulate data using the gaussian distribution.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# Get the sigma_size from maillog\n", "solution = b'CnJldCA9ICFncmVwIHNpemUgZGF0YS9tYWlsbG9nCnNpemVzID0gW2ludCh4WzU6LTFdKSBmb3Ig\\neCBpbiByZXQuZmllbGRzKDcpXQoK\\n'\n", "from course import show_solution\n", "# show_solution(solution)\n", "def maillog_size():\n", " ret = !grep size ../data/maillog\n", " sizes = [int(x[5:-1]) for x in ret.fields(7)]\n", " return sizes\n", "\n", "sizes = maillog_size()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "frequency, slots, _ = plt.hist(sizes, bins=5)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# We can use our time_d to simulate the load during the day\n", "from time import localtime\n", "hour = localtime().tm_hour\n", "mail_per_minute = time_d[hour] / 60 # minutes in hour\n", "print(\"Estimated frequency is\", mail_per_minute)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# And sizes to create a mail load generator\n", "mean_size, sigma_size = mean(sizes), std(sizes)\n", "print(mean_size, sigma_size)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# A mail load generator creating attachments of a given size...\n", "from random import gauss\n", "def mail_size_g():\n", " mu = min(sizes)\n", " x = gauss(mean_size, sigma_size) # a random number\n", " # as gaussian can give negative numbers\n", " # we should ceil up (or skip) negative values\n", " return x if x > mu else mu" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# How is our estimation far from reality?\n", "for emails in 5000, 10000, 100000:\n", " X = [mail_size_g() for x in xrange(emails)]\n", " print(emails, mean(X) / mean_size, std(X) / sigma_size)\n", "\n", "# Homework: write a script to simulate the load of your mailserver and tweak your mail_size_g\n", "# Hint: create a set of attachment files: 1k.out, 10k.out, 100k.out, 1000k.out, 5000k.out\n", "# Hint: use mail_size_g() to select one of those files instead of creating a given email" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To better simulate mail size, consider using a Cauchy distribution.\n", "\n", "See www.ieee802.org/16/tgm/contrib/C80216m-07_122.pdf" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Linear Correlation\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Let's plot the following datasets\n", "# taken from a 4-hour distribution\n", "mail_sent = [1, 5, 500, 250, 100, 7]\n", "kB_s = [70, 300, 29000, 12500, 450, 500]\n", "\n", "# A scatter plot can suggest relations\n", "# between data\n", "plt.scatter(mail_sent, kB_s)\n", "plt.title(\"I am a SCATTER plot!\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Linear Correlation\n", "\n", "The Pearson Coefficient $\\rho$ is a relation indicator.\n", "\n", "- no relation\n", "- direct relation (both dataset increase together)\n", "- inverse relation (one increase as the other decrease)\n", "\n", "\\begin{equation}\n", "\\rho(X,Y) = \\frac{\n", " \\sum (x-\\bar{x})(y-\\bar{y})\n", " }{\n", " \\sqrt{\\sum (x - \\bar{x})^{2}}\\sqrt{\\sum (y - \\bar{y})^{2}}\n", "}\n", "\\end{equation}\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from scipy.stats.stats import pearsonr\n", "ret = pearsonr(mail_sent, kB_s)\n", "print(ret)\n", "#>(0.9823, 0.0004)\n", "correlation, probability = ret" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# You must (scatter) plot!\n", "\n", "## $\\rho$ does not detect non-linear correlation\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Combinations\n", "\n", "*a way of selecting members from a grouping, such that \n", "the order of selection does not matter*\n", "\n", " ## Combinations vs Permutations\n", " - expressed by the binomial coefficient $\\binom{n}{k}$\n", " - different from Permutations where order matters (expressed by factorial $n!$)\n", " - Permutations are more than Combinations - as the \"Per\" prefix says :D\n", " \n", " \n", " ## Combining 4 suites, 2 at a time\n", " \n", " 1. ('♠', '♡')\n", " 2. ('♠', '♢')\n", " 3. ('♠', '♣')\n", " 4. ('♡', '♢')\n", " 5. ('♡', '♣')\n", " 6. ('♢', '♣')\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Given a table with many data series\n", "from course import table\n", "#table = {...\n", "# 'cpu_usr': [10, 23, 55, ..],\n", "# 'byte_in': [2132, 3212, 3942, ..], \n", "# }\n", "print(*table.keys(),sep='\\n')\n", "\n", "# We can combine all their names with\n", "from itertools import combinations\n", "list(combinations(table,2))[:10]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Netfishing correlation\n", "\n", "When looking for interesting data, we can calculate correlation between every combination of data series and concentrate on those over a given threshold " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "\n", "# We can try every combination between data series \n", "# and check if there's some correlation\n", "from itertools import combinations\n", "from scipy.stats.stats import pearsonr\n", "from course import table\n", "\n", "for k1, k2 in combinations(table, 2):\n", " r_coeff, probability = pearsonr(table[k1], table[k2])\n", " \n", " # I'm *still* not interested in data under this threshold\n", " if r_coeff < 0.6:\n", " continue\n", " \n", " print(\"linear correlation between {:8} and {:8} is {:3}\".format(\n", " k1, k2, r_coeff))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plotting correlation\n", "\n", "Coloring scatter-plot gives us one more dimension in a 2d plot: time!\n", "\n", "We have to mark our ordered data-series with a tick-series. eg.\n", "\n", "\n", "|tick| 0 | 0 | 0 | 1 | 1 | .. |\n", "|---|---|---|---|--|--|--| --|\n", "|cpu_usr| 10 | 23| 55| 33 | 10 | .. |\n", "|byte_in| 2132 | 3212| 3942 | 321 | 42| ..| \n", "\n", "Elements with the same tick are in the same bucket. \n", "\n", "We will now group our series in 3 buckets of 8-hours. We'll use the ```itertools.chain``` function which\n", "flattens a list of iterators.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from itertools import chain # magic function to flatten a list of iterators\n", "\n", "# Dividing samples in 3 buckets.\n", "BUCKETS = [\"morning\", \"afternoon\", \"night\"]\n", "samples = len(table['byte_in'])\n", "\n", "# scale samples on 24 hours\n", "ticks = list(chain(*(\n", " [x] * (samples // len(BUCKETS)) # generates a [ 0 ] * bucket_size list.\n", " for x in range(0, 24, 24 // len(BUCKETS)) # rescale all samples on 24 hours\n", " ))\n", " )\n", "print(ticks)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Any doubts? Just check the ticks distribution ;)\n", "print(Counter(ticks))\n", "\n", "# Exercise: use json.dumps to prettify the distribution." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Create a formatter associating a label to every color point:\n", "# - we just take into account the tick value, ignoring other values\n", "# - the tick is divided by 8 (the bucket_size) to implement the following mapping\n", "# - [0, 8) -> morning\n", "# - [8, 16) -> afternoon \n", "# - [16, 24) -> night\n", "formatter = plt.FuncFormatter(lambda tick, *ignored_args: BUCKETS[tick//8])\n", "\n", "plt.scatter(table['byte_in'], table['csw'], c=ticks)\n", "plt.colorbar(ticks = range(0, 28, 8), format=formatter)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from scipy import arange\n", "set(map(int, arange(0, 3, 1/samples)))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#netfishing_correlation_plot():\n", "# Consider using pandas.tools.plotting import scatter_matrix\n", "from itertools import combinations\n", "from scipy.stats.stats import pearsonr\n", "from course import table\n", "from matplotlib import pyplot as plt\n", "\n", "for (k1, v1), (k2,v2) in combinations(table.items(), 2):\n", " r_coeff, probability = pearsonr(table[k1], table[k2])\n", " if r_coeff < 0.6:\n", " continue\n", " plt.scatter(v1, v2, c=ticks)\n", " plt.xlabel(k1); plt.ylabel(k2)\n", " plt.title(\"r={:0.3f} p={:0.3f}\".format(r_coeff, probability))\n", " plt.colorbar(ticks = range(0, 28, 8), format=formatter)\n", " plt.show()\n", " # Uncomment the following lines to save the plot on disk\n", " # plt.savefig(\"colorbar_{}_{}.png\".format(k1, k2))\n", " # plt.close()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise solutions" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from __future__ import print_function, unicode_literals\n", "def ping_rtt(seconds=10):\n", " \"\"\"\n", " goal: slicing data\n", " goal: using zip to transpose data\n", " \"\"\"\n", " import sys\n", " from course import sh\n", " cmd = \"ping -c{seconds} www.google.it\"\n", " if 'win' in sys.platform:\n", " cmd = \"ping -n{seconds} www.google.it\"\n", "\n", " ping_output = sh(cmd.format(seconds=seconds))\n", " # Filter out uninteresting lines\n", " ping_output = [x.replace(b\"=\", b\" \") for x in ping_output if b'from' in x]\n", " if 'win' in sys.platform:\n", " ping_output = [x.split()[6::2] for x in ping_output]\n", " else:\n", " ping_output = [x.split()[-4:-1:2] for x in ping_output]\n", " ttl, rtt = zip(*ping_output)\n", " return map(float, rtt)\n", "\n", "print(*ping_rtt(3))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Exercise II solution\n", "# deliveder emails are like the following\n", "#May 14 16:00:04 rpolli postfix/qmgr[122]: 4DC3DA: removed\"\n", "\n", "ret = !grep removed maillog # get the interesting lines\n", "\n", "ts = ret.fields(2) # find the timestamp (3rd column)\n", "\n", "hours = [ int(ts) for x in ts ]\n", "time_d = {x: hours.count(x) for x in set(hours)}" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 0 }