{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" }, "colab": { "name": "Week 1 - Differential Privacy.ipynb", "version": "0.3.2", "provenance": [] } }, "cells": [ { "cell_type": "markdown", "metadata": { "id": "wyknZJh8HvN5", "colab_type": "text" }, "source": [ "from: https://github.com/udacity/private-ai \n", "\n", "## Lesson: Toy Differential Privacy - Simple Database Queries" ] }, { "cell_type": "markdown", "metadata": { "id": "wFGVxbK2HvN7", "colab_type": "text" }, "source": [ "In this section we're going to play around with Differential Privacy in the context of a database query. The database is going to be a VERY simple database with only one boolean column. Each row corresponds to a person. Each value corresponds to whether or not that person has a certain private attribute (such as whether they have a certain disease, or whether they are above/below a certain age). We are then going to learn how to know whether a database query over such a small database is differentially private or not - and more importantly - what techniques are at our disposal to ensure various levels of privacy\n", "\n", "\n", "### First We Create a Simple Database\n", "\n", "Step one is to create our database - we're going to do this by initializing a random list of 1s and 0s (which are the entries in our database). Note - the number of entries directly corresponds to the number of people in our database." ] }, { "cell_type": "code", "metadata": { "id": "5u-tG3hfHvN8", "colab_type": "code", "colab": {} }, "source": [ "import numpy as np\n", "\n", "# the number of entries in our database\n", "num_entries = 5000\n", "\n", "db = np.random.randint(0,2, size=(num_entries))\n", "db" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "HBjNwZkyHvOA", "colab_type": "text" }, "source": [ "## Project: Generate Parallel Databases\n", "\n", "Key to the definition of differenital privacy is the ability to ask the question \"When querying a database, if I removed someone from the database, would the output of the query be any different?\". Thus, in order to check this, we must construct what we term \"parallel databases\" which are simply databases with one entry removed. \n", "\n", "In this first project, I want you to create a list of every parallel database to the one currently contained in the \"db\" variable. Then, I want you to create a function which both:\n", "\n", "- creates the initial database (db)\n", "- creates all parallel databases" ] }, { "cell_type": "code", "metadata": { "id": "0NA0P_6IHvOB", "colab_type": "code", "colab": {} }, "source": [ "# try project here!" 
], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "N3bevY-zHvOD", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "gjYtYtuDHvOG", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "8AHifVCoHvOI", "colab_type": "code", "colab": {} }, "source": [ "def create_db_and_parallels(num_entries):\n", " db = np.random.randint(0,2, size=(num_entries))\n", " pdbs = [np.concatenate([db[:i],db[i+1:]]) for i in range(num_entries)]\n", " return db, pdbs" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "ktnjdW8QHvOK", "colab_type": "code", "colab": {} }, "source": [ "def create_db_and_parallels(num_entries):\n", " # complete the function that returns db, pdbs\n", " # where db is a array like above\n", " # pdbs is a list of parallel arrays\n", " return db, pdbs" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "CD2vrFcfHvOM", "colab_type": "text" }, "source": [ "# Lesson: Towards Evaluating The Differential Privacy of a Function\n", "\n", "Intuitively, we want to be able to query our database and evaluate whether or not the result of the query is leaking \"private\" information. As mentioned previously, this is about evaluating whether the output of a query changes when we remove someone from the database. Specifically, we want to evaluate the *maximum* amount the query changes when someone is removed (maximum over all possible people who could be removed). So, in order to evaluate how much privacy is leaked, we're going to iterate over each person in the database and measure the difference in the output of the query relative to when we query the entire database. \n", "\n", "Just for the sake of argument, let's make our first \"database query\" a simple sum. Aka, we're going to count the number of 1s in the database." ] }, { "cell_type": "code", "metadata": { "id": "o2MWGD1uHvON", "colab_type": "code", "colab": {} }, "source": [ "db, pdbs = create_db_and_parallels(5000)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "nQ3WqJbwHvOP", "colab_type": "code", "colab": {} }, "source": [ "def query(db):\n", " return db.sum()" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "3IdGLsUJHvOR", "colab_type": "code", "colab": {} }, "source": [ "full_db_result = query(db)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "89aXZ95mHvOU", "colab_type": "code", "colab": {} }, "source": [ "sensitivity = 0\n", "for pdb in pdbs:\n", " pdb_result = query(pdb)\n", " \n", " db_distance = np.abs(pdb_result - full_db_result)\n", " \n", " if(db_distance > sensitivity):\n", " sensitivity = db_distance" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "LAJxg9knHvOW", "colab_type": "code", "colab": {} }, "source": [ "sensitivity" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "WrmPhSJbHvOY", "colab_type": "text" }, "source": [ "# Project - Evaluating the Privacy of a Function\n", "\n", "In the last section, we measured the difference between each parallel db's query result and the query result for the entire database and then calculated the max value (which was 1). This value is called \"sensitivity\", and it corresponds to the function we chose for the query. 
Namely, the \"sum\" query will always have a sensitivity of exactly 1. However, we can also calculate sensitivity for other functions as well.\n", "\n", "Let's try to calculate sensitivity for the \"mean\" function." ] }, { "cell_type": "code", "metadata": { "id": "6ivpiC03HvOZ", "colab_type": "code", "colab": {} }, "source": [ "# try this project here!" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "dlEzFmfNHvOf", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "HYSKkESDHvOg", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "lWabur5IHvOi", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "DQHtYRGUHvOk", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "-t-q0H2_HvOl", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "nQhNHGzSHvOo", "colab_type": "text" }, "source": [ "Wow! That sensitivity is WAY lower. Note the intuition here. \"Sensitivity\" is measuring how sensitive the output of the query is to a person being removed from the database. For a simple sum, this is always 1, but for the mean, removing a person is going to change the result of the query by rougly 1 divided by the size of the database (which is much smaller). Thus, \"mean\" is a VASTLY less \"sensitive\" function (query) than SUM." ] }, { "cell_type": "markdown", "metadata": { "id": "5GEVtgIgHvOp", "colab_type": "text" }, "source": [ "# Project: Calculate L1 Sensitivity For Threshold\n", "\n", "In this first project, I want you to calculate the sensitivty for the \"threshold\" function. \n", "\n", "- First compute the sum over the database (i.e. sum(db)) and return whether that sum is greater than a certain threshold.\n", "- Then, I want you to create databases of size 10 and threshold of 5 and calculate the sensitivity of the function. \n", "- Finally, re-initialize the database 10 times and calculate the sensitivity each time." ] }, { "cell_type": "code", "metadata": { "id": "uiCtpgHSHvOp", "colab_type": "code", "colab": {} }, "source": [ "# try this project here!" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "YrMrVkFvHvOx", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "41MfYWDJHvOz", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "IOUV81kYHvO2", "colab_type": "text" }, "source": [ "# Lesson: A Basic Differencing Attack\n", "\n", "Sadly none of the functions we've looked at so far are differentially private (despite them having varying levels of sensitivity). The most basic type of attack can be done as follows.\n", "\n", "Let's say we wanted to figure out a specific person's value in the database. 
All we would have to do is query for the sum of the entire database and then the sum of the entire database without that person!\n", "\n", "# Project: Perform a Differencing Attack on Row 10\n", "\n", "In this project, I want you to construct a database and then demonstrate how you can use two different sum queries to expose the value of the person represented by row 10 in the database (note, you'll need to use a database with at least 10 rows)." ] }, { "cell_type": "code", "metadata": { "id": "vTwRgdsLHvO2", "colab_type": "code", "colab": {} }, "source": [ "# try this project here!" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "JVAlDZCXHvO6", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "GCTyQZtfHvO8", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "hrGniUaTHvO_", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "SzpW4EvIHvPB", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "kjeDxu_hHvPE", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "_VqPGMoMHvPF", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "VlQ9vMLBHvPI", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "gMCf4-i1HvPK", "colab_type": "text" }, "source": [ "# Project: Local Differential Privacy\n", "\n", "As you can see, the basic sum query is not differentially private at all! In truth, differential privacy always requires a form of randomness added to the query. Let me show you what I mean.\n", "\n", "### Randomized Response (Local Differential Privacy)\n", "\n", "Let's say I have a group of people I wish to survey about a very taboo behavior which I think they will lie about (say, I want to know if they have ever committed a certain kind of crime). I'm not a policeman; I'm just trying to collect statistics to understand the higher-level trend in society. So, how do we do this? One technique is to add randomness to each person's response by giving each person the following instructions (assuming I'm asking a simple yes/no question):\n", "\n", "- Flip a coin 2 times.\n", "- If the first coin flip is heads, answer honestly.\n", "- If the first coin flip is tails, answer according to the second coin flip (heads for yes, tails for no)!\n", "\n", "Thus, each person is now protected with \"plausible deniability\". If they answer \"Yes\" to the question \"have you committed X crime?\", then it might be because they actually did, or it might be because they are answering according to a random coin flip. Each person has a high degree of protection. Furthermore, we can recover the underlying statistics with some accuracy, as the \"true statistics\" are simply averaged with a fair coin flip. Thus, if we collect a bunch of samples and it turns out that 60% of people answer yes, then we know that the TRUE distribution is actually centered around 70%, because 70% averaged with 50% (a coin flip) is 60%, which is the result we obtained. 
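\n", "\n", "In code, this de-skewing step looks like the following (a quick sketch; \"observed\" is just our name for the fraction of yes answers in the noisy survey):\n", "\n", "```python\n", "# observed = 0.5 * true_rate + 0.5 * 0.5, so invert:\n", "true_rate_estimate = 2 * observed - 0.5   # e.g. 2 * 0.6 - 0.5 = 0.7\n", "```\n", "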
\n", "\n", "However, it should be noted that, especially when we only have a few samples, this comes at the cost of accuracy. This tradeoff exists across all of Differential Privacy. The greater the privacy protection (plausible deniability) the less accurate the results. \n", "\n", "Let's implement this local DP for our database before!" ] }, { "cell_type": "code", "metadata": { "id": "_KFyc1EHHvPL", "colab_type": "code", "colab": {} }, "source": [ "# try this project here!" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "BqlyvGLcHvPM", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "5Z14ByVRHvPO", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "Hnc75QO6HvPQ", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "kNwhQrbXHvPR", "colab_type": "code", "colab": {} }, "source": [ "# try different num_entries and see what happens." ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "-iMyPau7HvPT", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "vV9fLEc4HvPV", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "ZQIZuWshHvPW", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "O36Qw7_0HvPY", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "xjsf_NzAHvPZ", "colab_type": "text" }, "source": [ "# Project: Varying Amounts of Noise\n", "\n", "In this project, I want you to augment the randomized response query (the one we just wrote) to allow for varying amounts of randomness to be added. Specifically, I want you to bias the coin flip to be higher or lower and then run the same experiment. \n", "\n", "Note - this one is a bit tricker than you might expect. You need to both adjust the likelihood of the first coin flip AND the de-skewing at the end (where we create the \"augmented_result\" variable)." ] }, { "cell_type": "code", "metadata": { "id": "-FgyrgKSHvPa", "colab_type": "code", "colab": {} }, "source": [ "# try this project here!" 
], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "cCfyHTwHHvPb", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "RayEbgEXHvPc", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "G60Xf80BHvPe", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "8lzVrfT1HvPg", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "ZLU-lSy-HvPk", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "bOEsW35FHvPl", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "FZOfC3MhHvPn", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "qQwSDzM1HvPq", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "1Z7gNgFLHvPt", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "kRGBGgOzHvPv", "colab_type": "text" }, "source": [ "# Lesson: The Formal Definition of Differential Privacy\n", "\n", "The previous method of adding noise was called \"Local Differentail Privacy\" because we added noise to each datapoint individually. This is necessary for some situations wherein the data is SO sensitive that individuals do not trust noise to be added later. However, it comes at a very high cost in terms of accuracy. \n", "\n", "However, alternatively we can add noise AFTER data has been aggregated by a function. This kind of noise can allow for similar levels of protection with a lower affect on accuracy. However, participants must be able to trust that no-one looked at their datapoints _before_ the aggregation took place. In some situations this works out well, in others (such as an individual hand-surveying a group of people), this is less realistic.\n", "\n", "Nevertheless, global differential privacy is incredibly important because it allows us to perform differential privacy on smaller groups of individuals with lower amounts of noise. Let's revisit our sum functions." ] }, { "cell_type": "code", "metadata": { "id": "Ftn5x_P6HvPv", "colab_type": "code", "colab": {} }, "source": [ "db, pdbs = create_db_and_parallels(100)\n", "\n", "def query(db):\n", " return np.sum(db.astype('float'))\n", "\n", "def M(db):\n", " query(db) + noise\n", "\n", "query(db)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "a7EI9OPbHvPw", "colab_type": "text" }, "source": [ "So the idea here is that we want to add noise to the output of our function. We actually have two different kinds of noise we can add - Laplacian Noise or Gaussian Noise. 
However, before we do so, we need to dive into the formal definition of Differential Privacy.\n", "\n", "![alt text](dp_formula.png \"Title\")" ] }, { "cell_type": "markdown", "metadata": { "id": "_FCrI7AKHvPx", "colab_type": "text" }, "source": [ "_Image From: \"The Algorithmic Foundations of Differential Privacy\" - Cynthia Dwork and Aaron Roth - https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf_" ] }, { "cell_type": "markdown", "metadata": { "id": "imL9V1BCHvPx", "colab_type": "text" }, "source": [ "This definition does not _create_ differential privacy; instead, it is a measure of how much privacy is afforded by a query M. Specifically, it's a comparison between running the query M on a database (x) and on a parallel database (y). As you remember, parallel databases are defined to be the same as a full database (x) with one entry/person removed.\n", "\n", "Thus, this definition says that FOR ALL parallel databases and FOR ALL sets of possible outputs S, the probability that M(x) lands in S is at most e^epsilon times the probability that M(y) lands in S - and that this constraint is allowed to fail with probability at most delta. This is why the definition is called \"epsilon-delta\" differential privacy.\n", "\n", "# Epsilon\n", "\n", "Let's unpack the intuition of this for a moment. \n", "\n", "Epsilon Zero: If a query satisfied this inequality with epsilon set to 0, that would mean the query's output distribution is exactly the same for every parallel database as for the full database. As you may remember, when we calculated the \"threshold\" function, often the sensitivity was 0. In that case, the epsilon also happened to be zero.\n", "\n", "Epsilon One: If a query satisfied this inequality with epsilon 1, then the output distributions M(x) and M(y) may differ by at most a factor of e^1 - in other words, the log of the ratio between the two random distributions M(x) and M(y) is at most 1 (because all of these queries have some amount of randomness in them, just like we observed in the last section).\n", "\n", "# Delta\n", "\n", "Delta is basically the probability that the epsilon guarantee breaks. Namely, sometimes the epsilon is different for some queries than it is for others. For example, you may remember that when we were calculating the sensitivity of threshold, most of the time the sensitivity was 0 but sometimes it was 1. Thus, we could describe this as \"epsilon zero but non-zero delta\", which would say that epsilon holds except for some small probability of the time when it's arbitrarily higher. Note that this expression doesn't represent the full tradeoff between epsilon and delta." ] }, { "cell_type": "markdown", "metadata": { "id": "6OBFO9l2HvPy", "colab_type": "text" }, "source": [ "# Lesson: How To Add Noise for Global Differential Privacy\n", "\n", "In this lesson, we're going to learn how to take a query and add varying amounts of noise so that it satisfies a certain degree of differential privacy. In particular, we're going to leave behind the local differential privacy previously discussed and instead focus on global differential privacy. \n", "\n", "So, to sum up, this lesson is about adding noise to the output of our query so that it satisfies a certain epsilon-delta differential privacy threshold.\n", "\n", "There are two kinds of noise we can add - Gaussian Noise or Laplacian Noise. Generally speaking, Laplacian is better, but both are still valid. 
Now to the hard question...\n", "\n", "### How much noise should we add?\n", "\n", "The amount of noise necessary to add to the output of a query is a function of four things:\n", "\n", "- the type of noise (Gaussian/Laplacian)\n", "- the sensitivity of the query/function\n", "- the desired epsilon (ε)\n", "- the desired delta (δ)\n", "\n", "Thus, for each type of noise we're adding, we have a different way of calculating how much to add as a function of sensitivity, epsilon, and delta. We're going to focus on Laplacian noise. Laplacian noise is increased/decreased according to a \"scale\" parameter b. We choose \"b\" based on the following formula.\n", "\n", "b = sensitivity(query) / epsilon\n", "\n", "To generate Laplacian noise using numpy, see\n", "https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.laplace.html\n", "\n", "In other words, if we set b to be this value, then we know that we will have a privacy leakage of <= epsilon. Furthermore, the nice thing about Laplace is that it guarantees this with delta == 0. There are variants where we can get a lower epsilon by allowing delta to be non-zero, but we'll ignore them for now.\n", "\n", "### Querying Repeatedly\n", "\n", "- If we query the database multiple times, we can simply add the epsilons (even if we change the amount of noise and the individual epsilons are not the same)." ] }, { "cell_type": "markdown", "metadata": { "id": "_Ysg-oEaHvPz", "colab_type": "text" }, "source": [ "# Project: Create a Differentially Private Query\n", "\n", "In this project, I want you to take what you learned in the previous lesson and create a query function which sums over the database and adds just the right amount of noise such that it satisfies an epsilon constraint. Write a query for both \"sum\" and for \"mean\". Ensure that you use the correct sensitivity measures for both." ] }, { "cell_type": "code", "metadata": { "id": "33Yk-jFoHvP0", "colab_type": "code", "colab": {} }, "source": [ "# try this project here!" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "q6hcCu2mHvP1", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "nEet9vYvHvP2", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "gl38HjlsHvP5", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "LNQuhE7mHvP6", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "ptP_nymbHvP7", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "ND4SvVJ1HvP9", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "c9BH6EKqHvQA", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "t2gK4r2BHvQB", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "B_2ACUTcHvQC", "colab_type": "text" }, "source": [ "# Lesson: Differential Privacy for Deep Learning\n", "\n", "So in the last lessons you may have been wondering - what does all of this have to do with Deep Learning? 
Well, these same techniques we were just studying form the core primitives for how Differential Privacy provides guarantees in the context of Deep Learning. \n", "\n", "Previously, we defined perfect privacy as \"a query to a database returns the same value even if we remove any person from the database\", and used this intuition in the description of epsilon/delta. In the context of deep learning we have a similar standard.\n", "\n", "Training a model on a dataset should return the same model even if we remove any person from the dataset.\n", "\n", "Thus, we've replaced \"querying a database\" with \"training a model on a dataset\". In essence, the training process is a kind of query. However, one should note that this adds two points of complexity which database queries did not have:\n", "\n", " 1. do we always know where \"people\" are referenced in the dataset?\n", " 2. neural models rarely (if ever) train to exactly the same output model, even on identical data\n", "\n", "The answer to (1) is to treat each training example as a single, separate person. Strictly speaking, this is often overly zealous, as some training examples have no relevance to people and others may reference multiple people, or only partially (consider an image with multiple people contained within it). Thus, localizing exactly where \"people\" are referenced, and thus how much your model would change if people were removed, is challenging.\n", "\n", "The answer to (2) is also an open problem - but several interesting proposals have been made. We're going to focus on one of the most popular proposals, PATE.\n", "\n", "## An Example Scenario: A Health Neural Network\n", "\n", "First we're going to consider a scenario - you work for a hospital and you have a large collection of images of your patients. However, you don't know what's in them. You would like to use these images to develop a neural network which can automatically classify them; however, since your images aren't labeled, they aren't sufficient to train a classifier. \n", "\n", "However, being a cunning strategist, you realize that you can reach out to 10 partner hospitals which DO have annotated data. It is your hope to train your new classifier on their datasets so that you can automatically label your own. While these hospitals are interested in helping, they have privacy concerns regarding information about their patients. Thus, you will use the following technique to train a classifier which protects the privacy of patients in the other hospitals.\n", "\n", "- 1) You'll ask each of the 10 hospitals to train a model on their own datasets (all of which have the same kinds of labels).\n", "- 2) You'll then use each of the 10 partner models to predict on your local dataset, generating 10 labels for each of your datapoints.\n", "- 3) Then, for each local data point (now with 10 labels), you will perform a DP query to generate its final label. This query is a \"max\" function which returns the most frequent label across the 10 labels. We will need to add Laplacian noise to make this differentially private to a certain epsilon/delta constraint.\n", "- 4) Finally, we will retrain a new model on our local dataset, which now has labels. This will be our final \"DP\" model.\n", "\n", "So, let's walk through these steps. I will assume you're already familiar with how to train/predict a deep neural network, so we'll skip steps 1 and 2 and work with example data. 
We'll focus instead on step 3, namely how to perform the DP query for each example using toy data.\n", "\n", "So, let's say we have 10,000 training examples, and we've got 10 labels for each example (from our 10 \"teacher models\" which were trained directly on private data). Each label is chosen from a set of 10 possible labels (categories) for each image." ] }, { "cell_type": "code", "metadata": { "id": "6-tGJFVsHvQF", "colab_type": "code", "colab": {} }, "source": [ "import numpy as np" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "BOq0d8ouHvQG", "colab_type": "code", "colab": {} }, "source": [ "num_teachers = 10 # we're working with 10 partner hospitals\n", "num_examples = 10000 # the size of OUR dataset\n", "num_labels = 10 # number of labels for our classifier" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "eJiYSjceHvQI", "colab_type": "code", "colab": {} }, "source": [ "preds = (np.random.rand(num_teachers, num_examples) * num_labels).astype(int).transpose(1,0) # fake predictions" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "pBH9fjqhHvQJ", "colab_type": "code", "colab": {} }, "source": [ "new_labels = list()\n", "for an_image in preds:\n", "\n", "    label_counts = np.bincount(an_image, minlength=num_labels).astype(np.float64)\n", "\n", "    epsilon = 0.1\n", "    beta = 1 / epsilon  # Laplace scale b = sensitivity / epsilon (a count has sensitivity 1)\n", "\n", "    for i in range(len(label_counts)):\n", "        label_counts[i] += np.random.laplace(0, beta)\n", "\n", "    new_label = np.argmax(label_counts)\n", "    \n", "    new_labels.append(new_label)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "Ps9Ev8rIHvQK", "colab_type": "code", "colab": {} }, "source": [ "new_labels" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "bS9AR9avHvQO", "colab_type": "text" }, "source": [ "# PATE Analysis\n", "\n", "see https://arxiv.org/pdf/1610.05755.pdf" ] }, { "cell_type": "code", "metadata": { "id": "OtiHH7K4HvQP", "colab_type": "code", "colab": {} }, "source": [ "labels = np.array([9, 9, 3, 6, 9, 9, 9, 9, 8, 2])\n", "counts = np.bincount(labels, minlength=10)\n", "query_result = np.argmax(counts)\n", "query_result" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "src65URyHvQR", "colab_type": "code", "colab": {} }, "source": [ "from syft.frameworks.torch.differential_privacy import pate" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "pDPhR9rIHvQS", "colab_type": "code", "colab": {} }, "source": [ "num_teachers, num_examples, num_labels = (100, 100, 10)\n", "preds = (np.random.rand(num_teachers, num_examples) * num_labels).astype(int) #fake preds\n", "indices = (np.random.rand(num_examples) * num_labels).astype(int) # true answers\n", "\n", "preds[:,0:10] *= 0\n", "\n", "data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5)\n", "\n", "assert data_dep_eps < data_ind_eps\n", "\n" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "NKnepm3PHvQV", "colab_type": "code", "colab": {} }, "source": [ "data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5)\n", "print(\"Data Independent Epsilon:\", data_ind_eps)\n", "print(\"Data Dependent Epsilon:\", data_dep_eps)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "2x8yvBEYHvQW", "colab_type": "code", 
"colab": {} }, "source": [ "preds[:,0:50] *= 0" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "Munlg76jHvQX", "colab_type": "code", "colab": {} }, "source": [ "data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5, moments=20)\n", "print(\"Data Independent Epsilon:\", data_ind_eps)\n", "print(\"Data Dependent Epsilon:\", data_dep_eps)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "7iYfRyFRHvQZ", "colab_type": "text" }, "source": [ "# Where to Go From Here\n", "\n", "\n", "Read:\n", " - Algorithmic Foundations of Differential Privacy: https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf\n", " - Deep Learning with Differential Privacy: https://arxiv.org/pdf/1607.00133.pdf\n", " - The Ethical Algorithm: https://www.amazon.com/Ethical-Algorithm-Science-Socially-Design/dp/0190948205\n", " \n", "Topics:\n", " - The Exponential Mechanism\n", " - The Moment's Accountant\n", " - Differentially Private Stochastic Gradient Descent\n", "\n", "Advice:\n", " - For deployments - stick with public frameworks!\n", " - Join the Differential Privacy Community\n", " - Don't get ahead of yourself - DP is still in the early days" ] }, { "cell_type": "code", "metadata": { "id": "M9q2qH0aHvQa", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "L7O5-dbMHvQb", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] }, { "cell_type": "markdown", "metadata": { "id": "5vsjAjRqHvQc", "colab_type": "text" }, "source": [ "# Section Project(Optional):\n", "\n", "For the final project for this section, you're going to train a DP model using this PATE method on the MNIST dataset, provided below." ] }, { "cell_type": "code", "metadata": { "id": "KezhntJ_HvQc", "colab_type": "code", "colab": {} }, "source": [ "import torchvision.datasets as datasets\n", "mnist_trainset = datasets.MNIST(root='./data', train=True, download=True, transform=None)" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "hFC2iHLkHvQe", "colab_type": "code", "colab": {} }, "source": [ "train_data = mnist_trainset.train_data\n", "train_targets = mnist_trainset.train_labels" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "NAytlH9-HvQf", "colab_type": "code", "colab": {} }, "source": [ "test_data = mnist_trainset.test_data\n", "test_targets = mnist_trainset.test_labels" ], "execution_count": 0, "outputs": [] }, { "cell_type": "code", "metadata": { "id": "LVtnZc4kHvQi", "colab_type": "code", "colab": {} }, "source": [ "" ], "execution_count": 0, "outputs": [] } ] }