{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 2. Gender Detection\n", "\n", "## Figuring out genders from names\n", "\n", "We're going to use 3 different methods, all of which use a similar philosophy. Essentially, each of these services have build databases from datasets where genders are known or can be identified. For example, national census data and social media profiles. \n", "\n", " - [GenderDetector](https://pypi.python.org/pypi/gender-detector) can be run locally, but only provides \"male\", \"female\" or \"unknown\", and has a limitted number of names in the database. \n", " - [genderize.io](http://genderize.io) and [Gender API](http://gender-api.com) are web services that allow us to query names and return genders\n", " - Each of these services provides a \"probability\" that the gender is correct (so if \"Jamie\" shows up 80 times in their data as a female name, and 20 times as a male name, they'll say it's \"female\" with a probability of 0.8)\n", " - They also tell us how certain we can be of that gender by telling us how many times that name shows up (in the above example, the `count` would be 100. This is useful because some names might only have 1 or 2 entries, in which case a 100% probability of being male would be less reliable than a name that has 1000 entries. \n", " \n", "The web APIs have superior data, but the problem is that they are services that require you to pay if you make more than a certain number of queries in a short period of time. The owners of both services have generously provided me with enough queries to do this research for free. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Getting names to query\n", "\n", "First, we'll take the names from our pubmed queries and collapse them into sets. We don't really need to query the \n", "name \"John\" a thousand times - once will do. I'm going to loop through the csv we wrote out in the [last section](../xml_parsing.ipynb) and pull the fourth column, which contains our author name." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import os\n", "os.chdir(\"../data/pubdata\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "names = []\n", "with open(\"comp.csv\") as infile:\n", " for line in infile:\n", " names.append(line.split(\",\")[5])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we'll convert the list to a set, which is an unordered array of unique values (so it removes duplicates)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(len(names))\n", "\n", "names = set(names)\n", "print(len(names))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's a function that does the same thing." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def get_unique_names(csv_file):\n", " names = []\n", " with open(csv_file) as infile:\n", " for line in infile:\n", " names.append(line.split(\",\")[5])\n", " \n", " return set(names)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `set.union()` function will merge 2 sets into a single set, so we'll do this with our other datasets." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "names = names.union(get_unique_names(\"bio.csv\"))\n", "\n", "print(len(all_names))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Getting genders from names\n", "\n", "### GenderDetector\n", "First up - `GenderDetector`. The usage is pretty straighforward:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from gender_detector import GenderDetector\n", "detector = GenderDetector('us')\n", "print(detector.guess(\"kevin\"))\n", "print(detector.guess(\"melanie\"))\n", "print(detector.guess(\"ajasja\"))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "scrolled": true }, "outputs": [], "source": [ "gender_dict = {}\n", "counter = 0\n", "\n", "\n", "for name in names:\n", " try:\n", " gender = detector.guess(name)\n", " gender_dict[name] = gender\n", " except:\n", " print(name)\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(len(gender_dict))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(sum([1 for x in gender_dict if gender_dict[x] == 'unknown']))\n", "print(sum([1 for x in gender_dict if gender_dict[x] != 'unknown']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Output datasets" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import json\n", "\n", "with open(\"GenderDetector_genders.json\", \"w+\") as outfile:\n", " outfile.write(json.dumps(gender_dict, indent=4))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Genderize.io\n", "\n", "This one is a bit more complicated, since we have to make a call to the web api, and then parse the json that's returned. Happily, someone already wrote [a python package](https://pypi.python.org/pypi/Genderize) to do most of the work. We can query 10 names at a time rather than each one individually, and we'll get back a list of dictionaries, one for each query:\n", "```\n", "[{u'count': 1037, u'gender': u'male', u'name': u'James', u'probability': 0.99},\n", " {u'count': 234, u'gender': u'female', u'name': u'Eva', u'probability': 1.0},\n", " {u'gender': None, u'name': u'Thunderhorse'}]\n", "```\n", "\n", "I will turn that into a dictionary of dictionaries, where the name is the key, and the other elements are stored under them. Eg:\n", "\n", "```\n", "{\n", "u'James':{\n", " u'count': 1037,\n", " u'gender': u'male',\n", " u'probability': 0.99\n", " },\n", "u'Eva':{\n", " u'count': 234, \n", " u'gender': u'female',\n", " u'probability': 1.0\n", " },\n", "u'Thunderhorse':{\n", " u'count: 0,\n", " u'gender': None,\n", " u'probability': None\n", " }\n", "}\n", "```\n", "\n", "**Note**:\n", "\n", "I've got an API key stored in a separate file called `api_keys.py` (that I'm not putting on git because you can't have my queries!) that looks like this:\n", "\n", "```\n", "genderize_key = \"s0m3numb3rsandl3tt3rs\"\n", "genderAPI_key = \"0th3rnumb3rsandl3tt3rs\"\n", "```\n", "\n", "You can get a key from both services for free, but you'll be limited in the number of queries you can make. 
Just make a similar file, or substitute your own keys directly into the code below in place of those variables.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "scrolled": true }, "outputs": [], "source": [ "from api_keys import genderize_key\n", "from genderize import Genderize\n", "\n", "all_names = list(all_names)\n", "\n", "genderize = Genderize(\n", "    user_agent='Kevin_Bonham',\n", "    api_key=genderize_key)\n", "\n", "genderize_dict = {}\n", "\n", "# Query the API in batches of 10 names\n", "for i in range(0, len(all_names), 10):\n", "    query = all_names[i:i+10]\n", "    genders = genderize.get(query)\n", "\n", "    for gender in genders:\n", "        n = gender[\"name\"]\n", "        g = gender[\"gender\"]\n", "        if g is not None:\n", "            p = gender[\"probability\"]\n", "            c = gender[\"count\"]\n", "        else:\n", "            p = None\n", "            c = 0\n", "\n", "        genderize_dict[n] = {\"gender\":g, \"probability\":p, \"count\": c}\n", "\n", "with open(\"genderize_genders.json\", \"w+\") as outfile:\n", "    outfile.write(json.dumps(genderize_dict, indent=4))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "print(len(genderize_dict))\n", "\n", "# genderize stores None (not the string 'unknown') when it can't assign a gender\n", "print(sum([1 for x in genderize_dict if genderize_dict[x][\"gender\"] is None]))\n", "print(sum([1 for x in genderize_dict if genderize_dict[x][\"gender\"] is not None]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Gender-API\n", "\n", "This is a similar service, but I didn't find a python package for it. Thankfully, it's pretty easy to query directly. The following code is for python2, but you can find the python3 code on the [website](http://gender-api.com). The value that gets returned is a dictionary as well:\n", "\n", "```\n", "{u'accuracy': 99,\n", " u'duration': u'26ms',\n", " u'gender': u'male',\n", " u'name': u'markus',\n", " u'samples': 26354}\n", "```\n", "\n", "I'll convert that to the same keys and value types used for genderize above (e.g. \"probability\" instead of \"accuracy\", \"count\" instead of \"samples\", and `0.99` instead of `99`)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from api_keys import genderAPI_key\n", "import urllib2\n", "\n", "genderAPI_dict = {}\n", "\n", "# Query the API in batches of 20 names, joined by semicolons\n", "for i in range(0, len(all_names), 20):\n", "    names = all_names[i:i+20]\n", "    query = \";\".join(names)\n", "    \n", "    data = json.load(urllib2.urlopen(\"https://gender-api.com/get?key={}&name={}\".format(genderAPI_key, query)))\n", "    for r in data['result']:\n", "        n = r[\"name\"]\n", "        g = r[\"gender\"]\n", "\n", "        if g != u\"unknown\":\n", "            p = float(r[\"accuracy\"]) / 100\n", "            c = r[\"samples\"]\n", "        else:\n", "            p = None\n", "            c = 0\n", "\n", "        genderAPI_dict[n] = {\"gender\":g, \"probability\":p, \"count\": c}\n", "\n", "with open(\"genderAPI_genders.json\", \"w+\") as outfile:\n", "    outfile.write(json.dumps(genderAPI_dict, indent=4))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you want to do this without going through this notebook and you have a python2 installation, you can use the included `gender_detection.py`. The first argument should be `genderize` or `genderapi` depending on which method you want to use (or, if it's omitted, the script will try to use GenderDetector). The second argument should be a path to an output file (like `genders.json`), and then the rest of the arguments should be the csv files output from the previous notebook. 
The script will pull all the names together into a set, and then use the relevant API or GenderDetector.\n", "\n", "```\n", "$ python2 gender_detection.py genderize genders.json path/to/dataset1.csv path/to/dataset2.csv\n", "```" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.0" } }, "nbformat": 4, "nbformat_minor": 0 }