{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Exploring subdomains in the whole of gov.au\n", "\n", "New to Jupyter notebooks? Try Using Jupyter notebooks for a quick introduction.
\n", "\n", "Most of the notebooks in this repository work with small slices of web archive data. In this notebook we'll scale things up a bit to try to find all of the subdomains that have existed in the `gov.au` domain. As in other notebooks, we'll obtain the data by querying the Internet Archive's CDX API. The only real difference is that it will take some hours to harvest all the data.\n", "\n", "All we're interested in this time are unique domain names, so to minimise the amount of data we'll be harvesting we can make use of the CDX API's `collapse` parameter. By setting `collapse=urlkey` we can tell the CDX API to drop records with duplicate `urlkey` values – this should mean we only get one capture per page. However, this only works if the capture records are in adjacent rows, so there will probably still be some duplicates. We'll also use the `fl` parameter to limit the fields returned, and the `filter` parameter to limit results by `statuscode` and `mimetype`. So the parameters we'll use are:\n", "\n", "* `url=*.gov.au` – all of the pages in all of the subdomains under `gov.au`\n", "* `collapse=urlkey` – as few captures per page as possible\n", "* `filter=statuscode:200,mimetype:text/html` – only successful captures of HTML pages\n", "* `fl=urlkey,timestamp,original` – only these fields\n", "\n", "Even with these limits, the query will retrieve a LOT of data. To make the harvesting process easier to manage and more robust, I'm going to make use of the `requests-cache` module. This will capture the results of all requests, so that if things get interrupted and we have to restart, we can retrieve already harvested requests from the cache without downloading them again. We'll also write the harvested results directly to disk rather than consuming all our computer's memory. 
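To get a feel for what a single request with these parameters looks like before scaling up, here's a minimal sketch. The target domain (`nla.gov.au`) and the small `pageSize` are illustrative only – the actual harvest below queries `*.gov.au` and pages through the full result set.

```python
import requests

# One small request to the IA CDX API using the parameters described above.
# The domain and pageSize are examples only, not part of the real harvest.
params = {
    'url': '*.nla.gov.au',
    'output': 'json',
    'collapse': 'urlkey',
    'filter': ['statuscode:200', 'mimetype:text/html'],
    'fl': 'urlkey,timestamp,original',
    'page': 0,
    'pageSize': 5,
}
response = requests.get('http://web.archive.org/cdx/search/cdx', params=params)
response.raise_for_status()
rows = response.json()

if rows:
    print(rows[0])    # the first row lists the field names
    print(rows[1:6])  # the following rows are individual captures
```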
The file format will be the NDJSON (Newline Delimited JSON) format – because each line is a separate JSON object, we can just write it a line at a time as the data is received.\n", "\n", "For a general approach to harvesting domain-level information from the IA CDX API see [Harvesting data about a domain using the IA CDX API](harvesting_domain_data.ipynb).\n" ] }, { "cell_type": "code", "execution_count": 142, "metadata": {}, "outputs": [], "source": [ "import requests\n", "from requests.adapters import HTTPAdapter\n", "from requests.packages.urllib3.util.retry import Retry\n", "from tqdm.auto import tqdm\n", "import pandas as pd\n", "import time\n", "from requests_cache import CachedSession\n", "import ndjson\n", "from pathlib import Path\n", "from slugify import slugify\n", "import arrow\n", "import json\n", "import re\n", "from newick import Node\n", "import newick\n", "from ete3 import Tree, TreeStyle\n", "import ipywidgets as widgets\n", "from IPython.display import display, HTML, FileLink\n", "\n", "s = CachedSession()\n", "retries = Retry(total=10, backoff_factor=1, status_forcelist=[ 502, 503, 504 ])\n", "s.mount('https://', HTTPAdapter(max_retries=retries))\n", "s.mount('http://', HTTPAdapter(max_retries=retries))" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "domain = 'gov.au'" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "def get_total_pages(params):\n", " '''\n", " Gets the total number of pages in a set of results.\n", " '''\n", " these_params = params.copy()\n", " these_params['showNumPages'] = 'true'\n", " response = s.get('http://web.archive.org/cdx/search/cdx', params=these_params, headers={'User-Agent': ''})\n", " return int(response.text)\n", "\n", "def prepare_params(url, **kwargs):\n", " '''\n", " Prepare the parameters for a CDX API request.\n", " Adds all supplied keyword arguments as parameters (changing from_ to from).\n", " Adds in a few necessary parameters.\n", " '''\n", " params = kwargs\n", " params['url'] = url\n", " params['output'] = 'json'\n", " # CDX accepts a 'from' parameter, but this is a reserved word in Python\n", " # Use 'from_' to pass the value to the function & here we'll change it back to 'from'.\n", " if 'from_' in params:\n", " params['from'] = params['from_']\n", " del(params['from_'])\n", " return params\n", "\n", "def get_cdx_data(params):\n", " '''\n", " Make a request to the CDX API using the supplied parameters.\n", " Return the results as parsed JSON.\n", " '''\n", " response = s.get('http://web.archive.org/cdx/search/cdx', params=params, headers={'User-Agent': ''})\n", " response.raise_for_status()\n", " results = response.json()\n", " if not response.from_cache:\n", " time.sleep(0.2)\n", " return results\n", "\n", "def convert_lists_to_dicts(results):\n", " if results:\n", " keys = results[0]\n", " results_as_dicts = [dict(zip(keys, v)) for v in results[1:]]\n", " else:\n", " results_as_dicts = results\n", " return results_as_dicts\n", "\n", "def get_cdx_data_by_page(url, **kwargs):\n", " page = 0\n", " params = prepare_params(url, **kwargs)\n", " total_pages = get_total_pages(params)\n", " # We'll use a timestamp to distinguish between versions\n", " timestamp = arrow.now().format('YYYYMMDDHHmmss')\n", " file_path = Path(f'{slugify(domain)}-cdx-data-{timestamp}.ndjson')\n", " # Remove any old versions of the data file\n", " try:\n", " file_path.unlink()\n", " except FileNotFoundError:\n", " 
pass\n", " with tqdm(total=total_pages-page) as pbar1:\n", " with tqdm() as pbar2:\n", " while page < total_pages:\n", " params['page'] = page\n", " results = get_cdx_data(params)\n", " with file_path.open('a') as f:\n", " writer = ndjson.writer(f, ensure_ascii=False)\n", " for result in convert_lists_to_dicts(results):\n", " writer.writerow(result)\n", " page += 1\n", " pbar1.update(1)\n", " pbar2.update(len(results) - 1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Note that harvesting a domain has the same number of pages (i.e. requests) no matter what filters are applied -- it's just that some pages will be empty.\n", "# So repeating a domain harvest with different filters will mean less data, but the same number of requests.\n", "# What's most efficient? I dunno.\n", "get_cdx_data_by_page(f'*.{domain}', filter=['statuscode:200', 'mimetype:text/html'], collapse='urlkey', fl='urlkey,timestamp,original', pageSize=5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Process the harvested data\n", "\n", "After many hours, and many interruptions, the harvesting process finally finished. I ended up with a 65 GB NDJSON file. How many captures does it include?" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "189,639,944\n", "CPU times: user 1min 4s, sys: 27 s, total: 1min 31s\n", "Wall time: 2min 26s\n" ] } ], "source": [ "%%time\n", "count = 0\n", "with open('gov-au-cdx-data.ndjson') as f:\n", " for line in f:\n", " count += 1\n", "print(f'{count:,}') " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Find unique domains" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's extract a list of unique domains from all of those page captures. In the code below we extract domains from the `urlkey` and add them to a list. After every 100,000 lines, we use `set` to remove duplicates from the list. This is an attempt to find a reasonable balance between speed and memory consumption." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "c6f2cf0ce77d4e7db04744a152cfdf2a", "version_major": 2, "version_minor": 0 }, "text/plain": [ "HBox(children=(FloatProgress(value=1.0, bar_style='info', max=1.0), HTML(value='')))" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "CPU times: user 16min 7s, sys: 30.8 s, total: 16min 38s\n", "Wall time: 16min 40s\n" ] } ], "source": [ "%%time\n", "# This is slow, but will avoid eating up memory\n", "domains = []\n", "with open('gov-au-cdx-data.ndjson') as f:\n", " count = 0\n", " with tqdm() as pbar:\n", " for line in f:\n", " capture = json.loads(line)\n", " # Split the urlkey on ) to separate domain from path\n", " domain = capture['urlkey'].split(')')[0]\n", " # Remove port numbers\n", " domain = re.sub(r'\\:\\d+', '', domain)\n", " domains.append(domain)\n", " count += 1\n", " # Remove duplicates after every 100,000 lines to conserve memory\n", " if count > 100000:\n", " domains = list(set(domains))\n", " pbar.update(count)\n", " count = 0\n", "domains = list(set(domains))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How many unique domains are there?" 
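Before counting them, here's a quick sanity check of the `urlkey` handling used above. The sample keys are made up for illustration – real keys come straight from the harvested CDX data.

```python
import re

# Two made-up urlkeys in SURT form: the reversed domain sits before the ')',
# the path comes after it, and one key includes a port number.
sample_keys = ['au,gov,example)/about/index.html', 'au,gov,example,www:8080)/']

for key in sample_keys:
    # Split the urlkey on ) to separate domain from path
    domain = key.split(')')[0]
    # Remove port numbers
    domain = re.sub(r'\:\d+', '', domain)
    print(domain)

# Prints:
# au,gov,example
# au,gov,example,www
```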
] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "26233" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(domains)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "df = pd.DataFrame(domains, columns=['urlkey'])\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Save the list of domains to a CSV file to save us having to extract them again." ] }, { "cell_type": "code", "execution_count": 144, "metadata": {}, "outputs": [ { "data": { "text/html": [ "domains/gov-au-unique-domains.csv
" ], "text/plain": [ "/Volumes/Workspace/mycode/glam-workbench/webarchives/notebooks/domains/gov-au-unique-domains.csv" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "df.to_csv('domains/gov-au-unique-domains.csv', index=False)\n", "display(FileLink('domains/gov-au-unique-domains.csv'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Reload the list of domains from the CSV if necessary." ] }, { "cell_type": "code", "execution_count": 155, "metadata": {}, "outputs": [], "source": [ "domains = pd.read_csv('domains/gov-au-unique-domains.csv')['urlkey'].to_list()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Number of unique urls per subdomain\n", "\n", "Now that we have a list of unique domains we can use this to generate a count of unique urls per subdomain. This won't be exact. As noted previously, even with `collapse` set to `urlkey` there are likely to be duplicate urls. Getting rid of all the duplicates in such a large file would require a fair bit of processing, and I'm not sure it's worth it at this point. We really just want a sense of how subdomains are actually used." ] }, { "cell_type": "code", "execution_count": 62, "metadata": {}, "outputs": [], "source": [ "# Create a dictionary with the domains as keys and the values set to zero\n", "domain_counts = dict(zip(domains, [0] * len(domains)))" ] }, { "cell_type": "code", "execution_count": 65, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "8f7160c40a0c466089bf50a4db04f107", "version_major": 2, "version_minor": 0 }, "text/plain": [ "HBox(children=(FloatProgress(value=1.0, bar_style='info', max=1.0), HTML(value='')))" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "CPU times: user 16min 52s, sys: 34.9 s, total: 17min 27s\n", "Wall time: 17min 30s\n" ] } ], "source": [ "%%time\n", "# FIND NUMBER OF URLS PER DOMAIN\n", "# As above we'll go though the file line by line\n", "# but this time we'll extract the domain and increment the corresponding value in the dict.\n", "with open('gov-au-cdx-data.ndjson') as f:\n", " count = 0\n", " with tqdm() as pbar:\n", " for line in f:\n", " capture = json.loads(line)\n", " # Split the urlkey on ) to separate domain from path\n", " domain = capture['urlkey'].split(')')[0]\n", " domain = re.sub(r'\\:\\d+', '', domain)\n", " # Increment domain count\n", " domain_counts[domain] += 1\n", " count += 1\n", " # This is just to update the progress bar\n", " if count > 100000:\n", " pbar.update(count)\n", " count = 0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Convert to a dataframe\n", "\n", "We'll now convert the data to a dataframe and do a bit more processing." ] }, { "cell_type": "code", "execution_count": 70, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " urlkey number_of_pages\n", "0 au,gov,qld,justice,mogservices 6\n", "1 au,gov,health,business 9\n", "2 au,gov,qld,sasvrc 173\n", "3 au,gov,qld,qfes,dmlms 4\n", "4 au,gov,wa,kwinana,maps 4" ] }, "execution_count": 70, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Reshape dict as a list of dicts\n", "domain_counts_as_list = [{'urlkey': k, 'number_of_pages': v} for k, v in domain_counts.items()]\n", "\n", "# Convert to dataframe\n", "df_counts = pd.DataFrame(domain_counts_as_list)\n", "df_counts.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we're going to split the `urlkey` into its separate subdomains." ] }, { "cell_type": "code", "execution_count": 82, "metadata": {}, "outputs": [], "source": [ "# Split the urlkey on commas into separate columns -- this creates a new df\n", "df_split = df_counts['urlkey'].str.split(',', expand=True)\n", "\n", "# Merge the new df back with the original so we have both the urlkey and its components\n", "df_merged = pd.merge(df_counts, df_split, left_index=True, right_index=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we'll stitch the subdomains back together in a traditional domain format just for readability." ] }, { "cell_type": "code", "execution_count": 83, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " urlkey number_of_pages 0 1 2 3 \\\n", "0 au,gov,qld,justice,mogservices 6 au gov qld justice \n", "1 au,gov,health,business 9 au gov health business \n", "2 au,gov,qld,sasvrc 173 au gov qld sasvrc \n", "3 au,gov,qld,qfes,dmlms 4 au gov qld qfes \n", "4 au,gov,wa,kwinana,maps 4 au gov wa kwinana \n", "\n", " 4 5 6 7 8 9 domain \n", "0 mogservices None None None None None mogservices.justice.qld.gov.au \n", "1 None None None None None None business.health.gov.au \n", "2 None None None None None None sasvrc.qld.gov.au \n", "3 dmlms None None None None None dmlms.qfes.qld.gov.au \n", "4 maps None None None None None maps.kwinana.wa.gov.au " ] }, "execution_count": 83, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def join_domain(x):\n", " parts = x.split(',')\n", " parts.reverse()\n", " return '.'.join(parts)\n", "\n", "df_merged['domain'] = df_merged['urlkey'].apply(join_domain)\n", "df_merged.head()" ] }, { "cell_type": "code", "execution_count": 145, "metadata": {}, "outputs": [ { "data": { "text/html": [ "domains/gov-au-domains-split.csv
" ], "text/plain": [ "/Volumes/Workspace/mycode/glam-workbench/webarchives/notebooks/domains/gov-au-domains-split.csv" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "df_merged.to_csv('domains/gov-au-domains-split.csv')\n", "display(FileLink('domains/gov-au-domains-split.csv'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Let's count things!" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How many third level domains are there?" ] }, { "cell_type": "code", "execution_count": 89, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1720" ] }, "execution_count": 89, "metadata": {}, "output_type": "execute_result" } ], "source": [ "len(pd.unique(df_merged[2]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Which third level domains have the most subdomains?" ] }, { "cell_type": "code", "execution_count": 90, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "nsw 7477\n", "vic 3418\n", "qld 2772\n", "wa 2690\n", "sa 1719\n", "tas 957\n", "nt 752\n", "act 362\n", "embassy 151\n", "nla 138\n", "govspace 111\n", "deewr 77\n", "ga 75\n", "treasury 74\n", "ato 73\n", "health 73\n", "dest 69\n", "abs 61\n", "govcms 60\n", "bom 59\n", "Name: 2, dtype: int64" ] }, "execution_count": 90, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_merged[2].value_counts()[:20]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Which domains have the most unique pages?" ] }, { "cell_type": "code", "execution_count": 91, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
 domain number_of_pages
24311 trove.nla.gov.au 9,285,603
8551 nla.gov.au 2,592,182
17232 collectionsearch.nma.gov.au 2,422,514
4551 passwordreset.parliament.qld.gov.au 2,089,256
18817 parlinfo.aph.gov.au 1,882,646
2050 aph.gov.au 1,731,559
11539 bmcc.nsw.gov.au 1,414,711
18038 jobsearch.gov.au 1,293,760
4556 arpansa.gov.au 1,278,603
22182 abs.gov.au 961,526
1844 libero.gtcc.nsw.gov.au 959,490
24888 canterbury.nsw.gov.au 956,500
20982 library.campbelltown.nsw.gov.au 932,933
9451 defencejobs.gov.au 894,770
18377 webopac.gosford.nsw.gov.au 854,395
3162 library.lachlan.nsw.gov.au 838,972
6141 library.shoalhaven.nsw.gov.au 800,541
16750 catalogue.nla.gov.au 787,616
25461 library.bankstown.nsw.gov.au 767,550
14964 myagedcare.gov.au 759,384
" ], "text/plain": [ "" ] }, "execution_count": 91, "metadata": {}, "output_type": "execute_result" } ], "source": [ "top_20 = df_merged[['domain', 'number_of_pages']].sort_values(by='number_of_pages', ascending=False)[:20]\n", "top_20.style.format({'number_of_pages': '{:,}'})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Are there really domains made up of 10 levels?" ] }, { "cell_type": "code", "execution_count": 93, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['0-slwa.csiro.patron.eb20.com.henrietta.slwa.wa.gov.au',\n", " 'test-your-tired-self-prod.apps.p.dmp.aws.hosting.transport.nsw.gov.au',\n", " '0-www.library.eb.com.au.henrietta.slwa.wa.gov.au']" ] }, "execution_count": 93, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df_merged.loc[df_merged[9].notnull()]['domain'].to_list()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Let's visualise things!\n", "\n", "I thought it would be interesting to try and visualise all the subdomains as a circular dendrogram. After a bit of investigation I discovered the [ETE Toolkit](http://etetoolkit.org/) for the visualisation of phylogenetic trees – it seemed perfect. But to get data into ETE I first had to convert it into a [Newick formatted](https://en.wikipedia.org/wiki/Newick_format) string. Fortunately, there's a [Python package](https://pypi.org/project/newick/) for that.\n", "\n", "Warning! While the code below will indeed generate circular dendrograms from a domain name hierarchy, if you have more than a few hundred domains you'll find that the image gets very big, very quickly. I successfully saved the whole of the `gov.au` domain as a 32mb SVG file, which you can (very slowly) view in a web browser or graphics program. But any attempt to save into another image format at a size that would make the text readable consumed huge amounts of memory and forced me to pull the plug." ] }, { "cell_type": "code", "execution_count": 150, "metadata": {}, "outputs": [], "source": [ "def make_domain_tree(domains):\n", " '''\n", " Converts a list of urlkeys into a Newick tree via nodes.\n", " '''\n", " d_tree = Node()\n", " for domain in domains:\n", " domain = re.sub(r'\\:\\d+', '', domain)\n", " sds = domain.split(',')\n", " for i, sd in enumerate(sds):\n", " parent = '.'.join(reversed(sds[0:i])) if i > 0 else None\n", " label = '.'.join(reversed(sds[:i+1]))\n", " if not d_tree.get_node(label):\n", " if parent:\n", " d_tree.get_node(parent).add_descendant(Node(label))\n", " else:\n", " d_tree.add_descendant(Node(label))\n", " return newick.dumps(d_tree)" ] }, { "cell_type": "code", "execution_count": 156, "metadata": {}, "outputs": [], "source": [ "# Convert domains to a Newick tree\n", "full_tree = make_domain_tree(domains)" ] }, { "cell_type": "code", "execution_count": 152, "metadata": {}, "outputs": [], "source": [ "def save_dendrogram_to_file(tree, width, output_file):\n", " t = Tree(tree, format=1)\n", " circular_style = TreeStyle()\n", " circular_style.mode = \"c\" # draw tree in circular mode\n", " circular_style.optimal_scale_level = 'full'\n", " circular_style.root_opening_factor = 0\n", " circular_style.show_scale = False\n", " t.render(output_file, w=width, tree_style=circular_style)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First let's play safe by creating a PNG with a fixed width." 
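Before that, if you want to check what the Newick conversion actually produces, here's a tiny sketch. The urlkeys are made up, and it assumes the `make_domain_tree` function defined above has already been run.

```python
# A few made-up urlkeys -- just enough to show how the domain hierarchy nests
sample_domains = ['au,gov,example', 'au,gov,example,www', 'au,gov,other']

# make_domain_tree() returns a Newick-formatted string with one node per
# (sub)domain -- the format that ete3's Tree() expects as input
sample_tree = make_domain_tree(sample_domains)
print(sample_tree)
```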
] }, { "cell_type": "code", "execution_count": 158, "metadata": {}, "outputs": [], "source": [ "# Saving a PNG with a fixed width will work, but you won't be able to read any text\n", "save_dendrogram_to_file(full_tree, 1000, 'images/govau-all-1000.png')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's the result!\n", "\n", "![Circular dendrogram of all gov.au domains](images/govau-all-1000.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This will save a zoomable SVG version that allows you to read the labels, but it will be very slow to use, and difficult to convert into other formats." ] }, { "cell_type": "code", "execution_count": 159, "metadata": {}, "outputs": [], "source": [ "# Here be dendrodragons!\n", "# I don't think width does anything if you save to SVG\n", "save_dendrogram_to_file(full_tree, 5000, 'govau-all.svg')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try some third level domains." ] }, { "cell_type": "code", "execution_count": 164, "metadata": {}, "outputs": [], "source": [ "def display_dendrogram(label, level=2, df=df_merged, width=300):\n", " domains = df.loc[df[2] == label]['urlkey'].to_list()\n", " tree = make_domain_tree(domains)\n", " save_dendrogram_to_file(tree, width, f'images/{label}-domains-{width}.png')\n", " return f'
<figure><img src=\"images/{label}-domains-{width}.png\"><figcaption>{label.upper()}</figcaption></figure>
'" ] }, { "cell_type": "code", "execution_count": 165, "metadata": {}, "outputs": [ { "data": { "text/html": [ "

[circular dendrograms of subdomains for NSW, VIC, QLD, SA, WA, TAS, NT and ACT]

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Create dendrograms for each state/territory\n", "html = ''\n", "for state in ['nsw', 'vic', 'qld', 'sa', 'wa', 'tas', 'nt', 'act']:\n", " html += display_dendrogram(state)\n", "\n", "display(HTML(html))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If there are fewer domains you can see more detail." ] }, { "cell_type": "code", "execution_count": 141, "metadata": {}, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "act = display_dendrogram(state, width=8000)\n", "display(HTML(act))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "----\n", "Created by [Tim Sherratt](https://timsherratt.org) for the [GLAM Workbench](https://glam-workbench.github.io).\n", "\n", "Work on this notebook was supported by the [IIPC Discretionary Funding Programme 2019-2020](http://netpreserve.org/projects/)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.7" }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": {}, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 4 }