{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Splink data linking demo (link only)\n", "\n", "In this demo we link two small datasets. \n", "\n", "The larger table contains duplicates, but in this notebook we use the `link_only` setting, so `splink` makes no attempt to deduplicate these records. \n", "\n", "Note that it is possible to simultaneously link and dedupe using the `link_and_dedupe` setting.\n", "\n", "**Important** Where deduplication is not required, `link_only` can provide a significant performance boost by dramatically reducing the number of records which need to be compared.\n", "\n", "For example, if you wanted to link 10 records to 1,000, then the maximum number of comparisons that need to be made (i.e. with no blocking rules) is 10,000. If you need to dedupe as well, that number would be n(n-1)/2 = 509,545, where n = 1,010 is the total number of records.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 1: Imports and setup\n", "\n", "The following is just boilerplate code that sets up the Spark session and configures some other non-essential options." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd \n", "pd.options.display.max_columns = 500\n", "pd.options.display.max_rows = 100\n", "import altair as alt\n", "alt.renderers.enable('mimetype')" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import logging \n", "logging.basicConfig() # Means logs will print in Jupyter Lab\n", "\n", "# Set to DEBUG if you want splink to log the SQL statements it's executing under the hood\n", "logging.getLogger(\"splink\").setLevel(logging.INFO)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "from utility_functions.demo_utils import get_spark\n", "spark = get_spark()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 2: Read in the data\n", "\n", "In this example, we link two 
datasets, but you can link as many as you like.\n", "\n", "⚠️ Note that `splink` makes the following assumptions about your data:\n", "\n", "- There is a field containing a unique record identifier in each dataset. By default, this should be called `unique_id`, but you can change this in the settings\n", "- There is a field containing a dataset name in each dataset, to disambiguate the `unique_id` column if the same id values occur in more than one dataset. By default, this column is called `source_dataset`, but you can change this in the settings.\n", "- The two datasets being linked have common column names - e.g. date of birth is represented in both datasets in a field of the same name. In many cases, this means that the user needs to rename columns prior to using `splink`\n" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The count of rows in `df_1` is 181\n", "+---------+----------+-------+----------+------------+--------------------+-----+--------------+\n", "|unique_id|first_name|surname| dob| city| email|group|source_dataset|\n", "+---------+----------+-------+----------+------------+--------------------+-----+--------------+\n", "| 0| Julia | null|2015-10-29| London| hannah88@powers.com| 0| df_1|\n", "| 4| oNah| Watson|2008-03-23| Bolton|matthew78@ballard...| 1| df_1|\n", "| 13| Molly | Bell|2002-01-05|Peterborough| null| 2| df_1|\n", "| 15| Alexander|Amelia |1983-05-19| Glasgow|ic-mpbell@alleale...| 3| df_1|\n", "| 20| Ol vri|ynnollC|1972-03-08| Plymouth|derekwilliams@nor...| 4| df_1|\n", "+---------+----------+-------+----------+------------+--------------------+-----+--------------+\n", "only showing top 5 rows\n", "\n", "The count of rows in `df_2` is 819\n", "+---------+----------+-------+----------+------+--------------------+-----+--------------+\n", "|unique_id|first_name|surname| dob| city| email|group|source_dataset|\n", 
"+---------+----------+-------+----------+------+--------------------+-----+--------------+\n", "| 1| Julia | Taylor|2015-07-31|London| hannah88@powers.com| 0| df_2|\n", "| 2| Julia | Taylor|2016-01-27|London| hannah88@powers.com| 0| df_2|\n", "| 3| Julia | Taylor|2015-10-29| null| hannah88opowersc@m| 0| df_2|\n", "| 5| Noah | Watson|2008-03-23|Bolton|matthew78@ballard...| 1| df_2|\n", "| 6| Watson| Noah |2008-03-23| null|matthew78@ballard...| 1| df_2|\n", "+---------+----------+-------+----------+------+--------------------+-----+--------------+\n", "only showing top 5 rows\n", "\n" ] } ], "source": [ "from pyspark.sql.functions import lit \n", "df_1 = spark.read.parquet(\"data/fake_df_l.parquet\")\n", "df_1 = df_1.withColumn(\"source_dataset\", lit(\"df_1\"))\n", "df_2 = spark.read.parquet(\"data/fake_df_r.parquet\")\n", "df_2 = df_2.withColumn(\"source_dataset\", lit(\"df_2\"))\n", "print(f\"The count of rows in `df_1` is {df_1.count()}\")\n", "df_1.show(5)\n", "print(f\"The count of rows in `df_2` is {df_2.count()}\")\n", "df_2.show(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 3: Configure splink using the `settings` object\n", "\n", "Most of `splink`'s configuration options are stored in a settings dictionary. This dictionary allows significant customisation, and can therefore get quite complex. \n", "\n", "💥 We provide a tool to help author valid settings dictionaries, with tooltips and autocomplete, which you can find [here](http://robinlinacre.com/splink_settings_editor/).\n", "\n", "Customisation overrides default values built into splink. 
For the purposes of this demo, we will specify a simple settings dictionary, which means we will be relying on these sensible defaults.\n", "\n", "To help with authoring and validation of the settings dictionary, we have written a [json schema](https://json-schema.org/), which can be found [here](https://github.com/moj-analytical-services/splink/blob/master/splink/files/settings_jsonschema.json). \n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# The comparison expression allows for the case where a first name and surname have been inverted \n", "sql_case_expression = \"\"\"\n", "CASE \n", "WHEN first_name_l = first_name_r AND surname_l = surname_r THEN 4 \n", "WHEN first_name_l = surname_r AND surname_l = first_name_r THEN 3\n", "WHEN first_name_l = first_name_r THEN 2\n", "WHEN surname_l = surname_r THEN 1\n", "ELSE 0 \n", "END\n", "\"\"\"\n", "\n", "settings = {\n", " \"link_type\": \"link_only\", \n", " \"max_iterations\": 20,\n", " \"blocking_rules\": [\n", " ],\n", " \"comparison_columns\": [\n", " {\n", " \"custom_name\": \"name_inversion\",\n", " \"custom_columns_used\": [\"first_name\", \"surname\"],\n", " \"case_expression\": sql_case_expression,\n", " \"num_levels\": 5\n", " },\n", " {\n", " \"col_name\": \"city\",\n", " \"num_levels\": 3\n", " },\n", " {\n", " \"col_name\": \"email\",\n", " \"num_levels\": 3\n", " },\n", " {\n", " \"col_name\": \"dob\"\n", " }\n", " ],\n", " \"additional_columns_to_retain\": [\"group\"]\n", " \n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In words, this settings dictionary says:\n", "\n", "- We are performing a data linking task (the other options are `dedupe_only` or `link_and_dedupe`)\n", "- Since the input datasets are so small, we do not specify any blocking rules and instead generate all possible comparisons.\n", "- When comparing records, we will use information from the `first_name`, `surname`, `city`, `email` and `dob` columns to 
compute a match score.\n", "- For the comparisons on the `first_name` and `surname` columns, we allow the possibility that the names have been entered in the wrong order. \n", " - The highest level of similarity is that `first_name` and `surname` both match.\n", " - There are lower levels of similarity for the names being inverted, for just the first name matching, or for just the surname matching.\n", "- We will retain the `group` column in the results even though this is not used as part of comparisons. This is a labelled dataset and `group` contains the true match - i.e. where `group` matches, the records pertain to the same person." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 4: Estimate match scores using the Expectation Maximisation algorithm" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/Users/robinlinacre/anaconda3/lib/python3.8/site-packages/splink/default_settings.py:199: UserWarning: You have not specified any blocking rules, meaning all comparisons between the input dataset(s) will be generated and blocking will not be used.For large input datasets, this will generally be computationally intractable because it will generate comparisons equal to the number of rows squared.\n", " warnings.warn(\n", "INFO:splink.iterate:Iteration 0 complete\n", "INFO:splink.model:The maximum change in parameters was 0.40568520724773405 for key name_inversion, level 4\n", "INFO:splink.iterate:Iteration 1 complete\n", "INFO:splink.model:The maximum change in parameters was 0.06933289766311646 for key email, level 1\n", "INFO:splink.iterate:Iteration 2 complete\n", "INFO:splink.model:The maximum change in parameters was 0.02503591775894165 for key dob, level 0\n", "INFO:splink.iterate:Iteration 3 complete\n", "INFO:splink.model:The maximum change in parameters was 0.009511321783065796 for key dob, level 0\n", "INFO:splink.iterate:Iteration 4 complete\n", "INFO:splink.model:The 
maximum change in parameters was 0.004227638244628906 for key dob, level 0\n", "INFO:splink.iterate:Iteration 5 complete\n", "INFO:splink.model:The maximum change in parameters was 0.0022344589233398438 for key dob, level 0\n", "INFO:splink.iterate:Iteration 6 complete\n", "INFO:splink.model:The maximum change in parameters was 0.001312553882598877 for key dob, level 1\n", "INFO:splink.iterate:Iteration 7 complete\n", "INFO:splink.model:The maximum change in parameters was 0.0008212625980377197 for key dob, level 0\n", "INFO:splink.iterate:Iteration 8 complete\n", "INFO:splink.model:The maximum change in parameters was 0.0005371570587158203 for key dob, level 0\n", "INFO:splink.iterate:Iteration 9 complete\n", "INFO:splink.model:The maximum change in parameters was 0.0003641173243522644 for key city, level 0\n", "INFO:splink.iterate:Iteration 10 complete\n", "INFO:splink.model:The maximum change in parameters was 0.0002571418881416321 for key city, level 0\n", "INFO:splink.iterate:Iteration 11 complete\n", "INFO:splink.model:The maximum change in parameters was 0.0001854151487350464 for key city, level 0\n", "INFO:splink.iterate:Iteration 12 complete\n", "INFO:splink.model:The maximum change in parameters was 0.0001360774040222168 for key city, level 0\n", "INFO:splink.iterate:Iteration 13 complete\n", "INFO:splink.model:The maximum change in parameters was 0.0001013725996017456 for key city, level 0\n", "INFO:splink.iterate:Iteration 14 complete\n", "INFO:splink.model:The maximum change in parameters was 7.649511098861694e-05 for key city, level 0\n", "INFO:splink.iterate:EM algorithm has converged\n" ] }, { "data": { "text/plain": [ "DataFrame[match_probability: double, source_dataset_l: string, unique_id_l: bigint, source_dataset_r: string, unique_id_r: bigint, first_name_l: string, first_name_r: string, surname_l: string, surname_r: string, gamma_name_inversion: int, city_l: string, city_r: string, gamma_city: int, email_l: string, email_r: string, gamma_email: 
int, dob_l: string, dob_r: string, gamma_dob: int, group_l: bigint, group_r: bigint]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from splink import Splink\n", "\n", "linker = Splink(settings, [df_1, df_2], spark)\n", "df_e = linker.get_scored_comparisons()\n", "\n", "# Later, we will make term frequency adjustments. \n", "# persist() caches these results in memory, preventing them from being recomputed when we make those adjustments.\n", "df_e.persist() \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 5: Inspect results \n", "\n" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", " | match_probability | \n", "source_dataset_l | \n", "unique_id_l | \n", "source_dataset_r | \n", "unique_id_r | \n", "first_name_l | \n", "first_name_r | \n", "surname_l | \n", "surname_r | \n", "gamma_name_inversion | \n", "city_l | \n", "city_r | \n", "gamma_city | \n", "email_l | \n", "email_r | \n", "gamma_email | \n", "dob_l | \n", "dob_r | \n", "gamma_dob | \n", "group_l | \n", "group_r | \n", "
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
58499 | \n", "1.0 | \n", "df_1 | \n", "419 | \n", "df_2 | \n", "422 | \n", "Emily | \n", "Brown | \n", "Brown | \n", "Emily | \n", "3 | \n", "Lndon | \n", "London | \n", "1 | \n", "sarahbrown@mckinney.com | \n", "sarahnron@mckinbey.com | \n", "1 | \n", "2005-07-15 | \n", "2005-07-15 | \n", "1 | \n", "71 | \n", "71 | \n", "
79930 | \n", "1.0 | \n", "df_1 | \n", "581 | \n", "df_2 | \n", "585 | \n", "Eleanor | \n", "Shaw | \n", "Shaw | \n", "Eleanor | \n", "3 | \n", "Birmingham | \n", "Birmingha | \n", "1 | \n", "stephaniewebbhart.net | \n", "stephaniewebb@hart.net | \n", "1 | \n", "1979-03-31 | \n", "1979-03-31 | \n", "1 | \n", "97 | \n", "97 | \n", "
93101 | \n", "1.0 | \n", "df_1 | \n", "664 | \n", "df_2 | \n", "668 | \n", "Ivy | \n", "Taylor | \n", "Taylor | \n", "Ivy | \n", "3 | \n", "Lonon | \n", "London | \n", "1 | \n", "jonesjennmfer@pitt.coi | \n", "jonesjennifer@pitts.com | \n", "1 | \n", "1980-01-13 | \n", "1980-01-13 | \n", "1 | \n", "113 | \n", "113 | \n", "
93106 | \n", "1.0 | \n", "df_1 | \n", "664 | \n", "df_2 | \n", "673 | \n", "Ivy | \n", "Taylor | \n", "Taylor | \n", "Ivy | \n", "3 | \n", "Lonon | \n", "London | \n", "1 | \n", "jonesjennmfer@pitt.coi | \n", "jonesjennifer@pitts.com | \n", "1 | \n", "1980-01-13 | \n", "1980-01-13 | \n", "1 | \n", "113 | \n", "113 | \n", "
2471 | \n", "1.0 | \n", "df_1 | \n", "15 | \n", "df_2 | \n", "18 | \n", "Alexander | \n", "Amelia | \n", "Amelia | \n", "Alexander | \n", "3 | \n", "Glasgow | \n", "Glasgow | \n", "2 | \n", "ic-mpbell@allealewis.org | \n", "icampbell@allen-lewis.org | \n", "1 | \n", "1983-05-19 | \n", "1983-05-19 | \n", "1 | \n", "3 | \n", "3 | \n", "
137531 | \n", "1.0 | \n", "df_1 | \n", "924 | \n", "df_2 | \n", "926 | \n", "Mills | \n", "Thomas | \n", "Thomas | \n", "Mills | \n", "3 | \n", "London | \n", "London | \n", "2 | \n", "hensondebbie@garcia.com | \n", "hensondrbbie@gaeia.com | \n", "1 | \n", "1970-03-09 | \n", "1970-03-09 | \n", "1 | \n", "167 | \n", "167 | \n", "
79105 | \n", "1.0 | \n", "df_1 | \n", "574 | \n", "df_2 | \n", "578 | \n", "George | \n", "Williams | \n", "Williams | \n", "George | \n", "3 | \n", "London | \n", "London | \n", "2 | \n", "desek58gibbr.biz | \n", "derek58@gibbs.biz | \n", "1 | \n", "1981-08-06 | \n", "1981-08-06 | \n", "1 | \n", "96 | \n", "96 | \n", "
79104 | \n", "1.0 | \n", "df_1 | \n", "574 | \n", "df_2 | \n", "577 | \n", "George | \n", "Williams | \n", "Williams | \n", "George | \n", "3 | \n", "London | \n", "London | \n", "2 | \n", "desek58gibbr.biz | \n", "derek58@gibbs.biz | \n", "1 | \n", "1981-08-06 | \n", "1981-08-06 | \n", "1 | \n", "96 | \n", "96 | \n", "
142479 | \n", "1.0 | \n", "df_1 | \n", "960 | \n", "df_2 | \n", "966 | \n", "Gabriel | \n", "Bartlett | \n", "Bartlett | \n", "Gabriel | \n", "3 | \n", "Wolverhampton | \n", "Wolverhampton | \n", "2 | \n", "ogomez@robinson-mckinney.com | \n", "ogomez@rob-nsonimcknney.com | \n", "1 | \n", "1973-12-09 | \n", "1973-12-09 | \n", "1 | \n", "173 | \n", "173 | \n", "
29657 | \n", "1.0 | \n", "df_1 | \n", "209 | \n", "df_2 | \n", "210 | \n", "Thompson | \n", "Freddie | \n", "Freddie | \n", "Thompson | \n", "3 | \n", "Peterborough | \n", "Peterborough | \n", "2 | \n", "scottsalinas@hughes-lopez.com | \n", "scottsalinah@ughes-lopez.com | \n", "1 | \n", "1999-07-23 | \n", "1999-07-23 | \n", "1 | \n", "36 | \n", "36 | \n", "
73322 | \n", "1.0 | \n", "df_1 | \n", "517 | \n", "df_2 | \n", "521 | \n", "Brown | \n", "Martha | \n", "Martha | \n", "Brown | \n", "3 | \n", "Southend-on-Sea | \n", "Southend-on-Sea | \n", "2 | \n", "watsonthomas@jones-stuart.biz | \n", "watsonthomas@onesistuart.b-z | \n", "1 | \n", "2002-09-01 | \n", "2002-09-01 | \n", "1 | \n", "89 | \n", "89 | \n", "
73327 | \n", "1.0 | \n", "df_1 | \n", "517 | \n", "df_2 | \n", "526 | \n", "Brown | \n", "Martha | \n", "Martha | \n", "Brown | \n", "3 | \n", "Southend-on-Sea | \n", "Southend-on-Sea | \n", "2 | \n", "watsonthomas@jones-stuart.biz | \n", "watsonthomas@jones-s.urttbiz | \n", "1 | \n", "2002-09-01 | \n", "2002-09-01 | \n", "1 | \n", "89 | \n", "89 | \n", "
102976 | \n", "1.0 | \n", "df_1 | \n", "726 | \n", "df_2 | \n", "727 | \n", "Harry | \n", "Lawrence | \n", "Lawrence | \n", "Harry | \n", "3 | \n", "Stoke-on-Trent | \n", "Stoke-on-Trent | \n", "2 | \n", "aarbarpace@mbnning.org | \n", "barbarapace@manning.org | \n", "1 | \n", "2016-12-25 | \n", "2016-12-25 | \n", "1 | \n", "125 | \n", "125 | \n", "
93102 | \n", "1.0 | \n", "df_1 | \n", "664 | \n", "df_2 | \n", "669 | \n", "Ivy | \n", "Ivy | \n", "Taylor | \n", "Taylor | \n", "4 | \n", "Lonon | \n", "Lodno | \n", "1 | \n", "jonesjennmfer@pitt.coi | \n", "jonesjennifer@pitts.com | \n", "1 | \n", "1980-01-13 | \n", "1980-01-13 | \n", "1 | \n", "113 | \n", "113 | \n", "
79102 | \n", "1.0 | \n", "df_1 | \n", "574 | \n", "df_2 | \n", "575 | \n", "George | \n", "George | \n", "Williams | \n", "Williams | \n", "4 | \n", "London | \n", "Lndon | \n", "1 | \n", "desek58gibbr.biz | \n", "derek58@gibbs.biz | \n", "1 | \n", "1981-08-06 | \n", "1981-08-06 | \n", "1 | \n", "96 | \n", "96 | \n", "
93100 | \n", "1.0 | \n", "df_1 | \n", "664 | \n", "df_2 | \n", "667 | \n", "Ivy | \n", "Ivy | \n", "Taylor | \n", "Taylor | \n", "4 | \n", "Lonon | \n", "London | \n", "1 | \n", "jonesjennmfer@pitt.coi | \n", "jonesjennifer@pitts.com | \n", "1 | \n", "1980-01-13 | \n", "1980-01-13 | \n", "1 | \n", "113 | \n", "113 | \n", "
102979 | \n", "1.0 | \n", "df_1 | \n", "726 | \n", "df_2 | \n", "730 | \n", "Harry | \n", "Harry | \n", "Lawrence | \n", "Lawrence | \n", "4 | \n", "Stoke-on-Trent | \n", "Stoke-on-ernt | \n", "1 | \n", "aarbarpace@mbnning.org | \n", "barbarapace@manning.org | \n", "1 | \n", "2016-12-25 | \n", "2016-12-25 | \n", "1 | \n", "125 | \n", "125 | \n", "
128490 | \n", "1.0 | \n", "df_1 | \n", "879 | \n", "df_2 | \n", "883 | \n", "Leo | \n", "Leo | \n", "Jones | \n", "Jones | \n", "4 | \n", "Ldnon | \n", "London | \n", "1 | \n", "tcarr@lewis-kline.com | \n", "tcarr@lweis-kine.com | \n", "1 | \n", "2019-06-15 | \n", "2019-06-15 | \n", "1 | \n", "156 | \n", "156 | \n", "
79934 | \n", "1.0 | \n", "df_1 | \n", "581 | \n", "df_2 | \n", "589 | \n", "Eleanor | \n", "Eleanor | \n", "Shaw | \n", "Shaw | \n", "4 | \n", "Birmingham | \n", "Birmingham | \n", "2 | \n", "stephaniewebbhart.net | \n", "stephaniewebb@hart.net | \n", "1 | \n", "1979-03-31 | \n", "1979-03-31 | \n", "1 | \n", "97 | \n", "97 | \n", "
79103 | \n", "1.0 | \n", "df_1 | \n", "574 | \n", "df_2 | \n", "576 | \n", "George | \n", "George | \n", "Williams | \n", "Williams | \n", "4 | \n", "London | \n", "London | \n", "2 | \n", "desek58gibbr.biz | \n", "derek58@gibbs.biz | \n", "1 | \n", "1981-08-06 | \n", "1981-08-06 | \n", "1 | \n", "96 | \n", "96 | \n", "