{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Harvest items from a search in RecordSearch\n", "\n", "Ever searched for items in RecordSearch and wanted to save the results as a CSV file, or in some other machine-readable format? This notebook makes it easy to save the results of an item search as a downloadable dataset. You can even download all the images from items that have been digitised, or save the complete files as PDFs!\n", "\n", "RecordSearch doesn't currently have an option for downloading machine-readable data. So to get collection metadata in a structured form, we have to resort of screen-scraping. This notebook uses the [RecordSearch Data Scraper](https://wragge.github.io/recordsearch_data_scraper/) to do most of the work.\n", "\n", "Notes:\n", "\n", "* The RecordSearch Data Scraper caches results to improve efficiency. This also makes it easy to resume a failed harvest. If you want to completely refresh a harvest, then delete the `cache_db.sqlite` file to start from scratch.\n", "* The harvesting function below automatically slices large searches (greater than 20,000 results) into smaller chunks. This avoids RecordSearch's 20,000 result limit. This should work in most cases. If it doesn't, try changing the `control_range` list below. This list supplies a range of prefixes which are supplied (with a trailing '*' for wildcard matches) as the `control` value." ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## Available search parameters\n", "\n", "The available search parameters are the same as those in RecordSearch's Advanced Search form. There's lots of them, but you'll probably only end up using a few like `kw` and `series`. Note that you can use \\* for wildcard searches as you can in the web interface. So setting `kw` to 'wragge\\*' will find both 'wragge' and 'wragges'.\n", "\n", "See the [RecordSearch Data Scraper documentation](https://wragge.github.io/recordsearch_data_scraper/scrapers.html#RSItemSearch) for more information on search parameters.\n", "\n", "* `kw` – string containing keywords to search for\n", "* `kw_options` – how to interpret `kw`, possible values are:\n", " * 'ALL' – return results containing all of the keywords (default)\n", " * 'ANY' – return results containg any of the keywords\n", " * 'EXACT' – treat `kw` as a phrase rather than a list of words\n", "* `kw_exclude` – string containing keywords to exclude from search\n", "* `kw_exclude_options` – how to interpret `kw_exclude`, possible values are:\n", " * 'ALL' – exclude results containing all of the keywords (default)\n", " * 'ANY' – exclude results containg any of the keywords\n", " * 'EXACT' – treat `kw_exact` as a phrase rather than a list of words\n", "* `search_notes` – set to 'on' to search item notes as well as metadata\n", "* `series` – search for items in this series\n", "* `series_exclude` – exclude items from this series\n", "* `control` – search for items matching this control symbol\n", "* `control_exclude` – exclude items matching this control symbol\n", "* `item_id` – search for items with this item ID number (formerly called `barcode`)\n", "* `date_from` – search for items with a date (year) greater than or equal to this, eg. 
'1935'\n", "* `date_to` – search for items with a date (year) less than or equal to this\n", "* `formats` – limit search to items in a particular format, see possible values below\n", "* `formats_exclude` – exclude items in a particular format, see possible values below\n", "* `locations` – limit search to items held in a particular location, see possible values below\n", "* `locations_exclude` – exclude items held in a particular location, see possible values below\n", "* `access` – limit to items with a particular access status, see possible values below\n", "* `access_exclude` – exclude items with a particular access status, see possible values below\n", "* `digital` – set to `True` to limit to items that are digitised\n", "\n", "\n", "Possible values for `formats` and `formats_exclude`: \n", "\n", "* 'Paper files and documents'\n", "* 'Index cards'\n", "* 'Bound volumes'\n", "* 'Cartographic records'\n", "* 'Photographs'\n", "* 'Microforms'\n", "* 'Audio-visual records'\n", "* 'Audio records'\n", "* 'Electronic records'\n", "* '3-dimensional records'\n", "* 'Scientific specimens'\n", "* 'Textiles'\n", "\n", "Possible values for `locations` and `locations_exclude`:\n", "\n", "* 'NAT, ACT'\n", "* 'Adelaide'\n", "* 'Australian War Memorial'\n", "* 'Brisbane'\n", "* 'Darwin'\n", "* 'Hobart'\n", "* 'Melbourne'\n", "* 'Perth'\n", "* 'Sydney'\n", "\n", "Possible values for `access` and `access_exclude`:\n", "\n", "* 'OPEN'\n", "* 'OWE'\n", "* 'CLOSED'\n", "* 'NYE'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are some additional parameters that affect the way the search results are delivered.\n", "\n", "* `record_detail` – controls the amount of information included in each item record, possible values:\n", " * 'brief' (default) – just the info in the search results\n", " * 'digitised' – add the number of pages if the file is digitised (slower)\n", " * 'full' – get the full individual record for each result, includes number of digitised pages and access examination details (slowest)\n", " \n", "Note that if you want to harvest all the digitised page images from a search, you need to set `record_detail` to either 'digitised' or 'full'." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## How your harvest is saved\n", "\n", "Once it's downloaded all the results, the harvesting function creates a directory for the harvest and saves three files inside:\n", "\n", "* `metadata.json` – this is a summary of your harvest, including the parameters you used and the date it was run\n", "* `results.ndjson` – this is the harvested data with each record saved as a JSON object on a new line\n", "* `results.csv` – the harvested data with any duplicates removed saved as a CSV file (if you've saved 'full' records, the list of `access_decision_reasons` will be saved as a pipe-separated string)\n", "\n", "The `metadata.json` file looks something like this:\n", "\n", "```json\n", "{\n", " \"date_harvested\": \"2021-05-22T22:05:10.705184\", \n", " \"search_params\": {\"results_per_page\": 20, \"sort\": 9, \"record_detail\": \"digitised\"}, \n", " \"search_kwargs\": {\"kw\": \"wragge\"}, \n", " \"total_results\": 208, \n", " \"total_harvested\": 208,\n", " \"total_deduplicated\": 208\n", "}\n", "```\n", "\n", "The 'total' values represent slightly different things:\n", "\n", "* `total_results`: the number of matching results RecordSearch thinks there are\n", "* `total_harvested`: the number of results actually harvested\n", "* `total_deduplicated`: the number of records left after duplicates are removed from the harvested results\n", "\n", "Duplicate records sometimes occur when items have an alternative control symbol. The CSV creation process removes any duplicates.\n", "\n", "The fields in the results files are:\n", "\n", "* `title`\n", "* `identifier` \n", "* `series` \n", "* `control_symbol`\n", "* `digitised_status`\n", "* `digitised_pages` – if `record_detail` is set to 'digitised' or 'full'\n", "* `access_status`\n", "* `access_decision_reasons` – if `record_detail` is set to 'full'\n", "* `location`\n", "* `retrieved` – date/time when this record was retrieved from RecordSearch\n", "* `contents_date_str`\n", "* `contents_start_date`\n", "* `contents_end_date`\n", "* `access_decision_date_str` – if `record_detail` is set to 'full'\n", "* `access_decision_date` – if `record_detail` is set to 'full'\n", "\n", "See below for information on saving digitised images and PDFs." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Import what we need" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "import string\n", "import time\n", "from datetime import datetime\n", "from pathlib import Path\n", "\n", "import pandas as pd\n", "import requests\n", "from IPython.display import HTML, FileLink, display\n", "from recordsearch_data_scraper.scrapers import RSItemSearch\n", "from slugify import slugify\n", "from tqdm.auto import tqdm\n", "\n", "# This is a workaround for a problem with tqdm adding space to cells\n", "HTML(\n", " \"\"\"\n", " \n", "\"\"\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define some functions" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# This is basically a list of letters and numbers that we can use to build up control symbol values.\n", "control_range = (\n", " [str(number) for number in range(0, 10)]\n", " + [letter for letter in string.ascii_uppercase]\n", " + [\"/\"]\n", ")\n", "\n", "\n", "def get_results(data_dir, **kwargs):\n", " \"\"\"\n", " Save all the results from a search using the given parameters.\n", " If there are more than 20,000 results, return False.\n", " Otherwise, return the harvested items.\n", " \"\"\"\n", " s = RSItemSearch(**kwargs)\n", " if s.total_results == \"20,000+\":\n", " return False\n", " else:\n", " with tqdm(total=s.total_results, leave=False) as pbar:\n", " more = True\n", " while more:\n", " data = s.get_results()\n", " if data[\"results\"]:\n", " save_to_ndjson(data_dir, data[\"results\"])\n", " pbar.update(len(data[\"results\"]))\n", " time.sleep(0.5)\n", " else:\n", " more = False\n", " return True\n", "\n", "\n", "def refine_controls(current_control, data_dir, **kwargs):\n", " \"\"\"\n", " Add additional letters/numbers to the control symbol wildcard search\n", " until the number of results is less than 20,000.\n", " Then harvest the results.\n", " Returns:\n", " * the RSItemSearch object (containing the search params, total results etc)\n", " * a list containing the harvested items\n", " \"\"\"\n", " for control in control_range:\n", " new_control = current_control.strip(\"*\") + control + \"*\"\n", " # print(new_control)\n", " kwargs[\"control\"] = new_control\n", " results = get_results(data_dir, **kwargs)\n", " # print(total)\n", " if results is False:\n", " refine_controls(new_control, data_dir, **kwargs)\n", "\n", "\n", "def create_data_dir(search, today):\n", " \"\"\"\n", " Create a directory for the harvested data -- using the date and search parameters.\n", " \"\"\"\n", " params = search.params.copy()\n", " params.update(search.kwargs)\n", " search_param_str = slugify(\n", " \"_\".join(\n", " sorted(\n", " [\n", " f\"{k}_{v}\"\n", " for k, v in params.items()\n", " if v is not None and k not in [\"results_per_page\", \"sort\"]\n", " ]\n", " )\n", " )\n", " )\n", " data_dir = Path(\"harvests\", f'{today.strftime(\"%Y%m%d_%H%M%S\")}_{search_param_str}')\n", " data_dir.mkdir(exist_ok=True, parents=True)\n", " return data_dir\n", "\n", "\n", "def save_to_ndjson(data_dir, results):\n", " \"\"\"\n", " Save results into a single, newline delimited JSON file.\n", " \"\"\"\n", " output_file = Path(data_dir, \"results.ndjson\")\n", " with output_file.open(\"a\") as ndjson_file:\n", " for result in results:\n", " ndjson_file.write(json.dumps(result) + \"\\n\")\n", "\n", "\n", "def save_metadata(search, data_dir, today, totals):\n", " \"\"\"\n", " 
Save information about the harvest to a JSON file.\n", " \"\"\"\n", " metadata = {\n", " \"date_harvested\": today.isoformat(),\n", " \"search_params\": search.params,\n", " \"search_kwargs\": search.kwargs,\n", " \"total_results\": search.total_results,\n", " \"total_harvested\": totals[\"harvested\"],\n", " \"total_after_deduplication\": totals[\"deduped\"],\n", " }\n", "\n", " with Path(data_dir, \"metadata.json\").open(\"w\") as md_file:\n", " json.dump(metadata, md_file)\n", "\n", "\n", "def save_csv(data_dir):\n", " \"\"\"\n", " Save the harvested results as a CSV file, removing any duplicates.\n", " \"\"\"\n", " output_file = Path(data_dir, \"results.csv\")\n", " input_file = Path(data_dir, \"results.ndjson\")\n", " df = pd.read_json(input_file, lines=True)\n", " harvested = df.shape[0]\n", " # Flatten list\n", " try:\n", " df[\"access_decision_reasons\"] = (\n", " df[\"access_decision_reasons\"].dropna().apply(lambda l: \" | \".join(l))\n", " )\n", " except KeyError:\n", " pass\n", " # Remove any duplicates\n", " df.drop_duplicates(inplace=True)\n", " df.to_csv(output_file, index=False)\n", " deduped = df.shape[0]\n", " return {\"harvested\": harvested, \"deduped\": deduped}\n", "\n", "\n", "def harvest_search(**kwargs):\n", " \"\"\"\n", " Harvest all the items from a search using the supplied parameters.\n", " If there are more than 20,000 results, it will use control symbol\n", " wildcard values to try and split the results into harvestable chunks.\n", " \"\"\"\n", " # Initialise the search\n", " search = RSItemSearch(**kwargs)\n", " today = datetime.now()\n", " data_dir = create_data_dir(search, today)\n", " # If there are more than 20,000 results, try chunking using control symbols\n", " if search.total_results == \"20,000+\":\n", " # Loop through the letters and numbers\n", " for control in control_range:\n", " # print(control)\n", " # Add letter/number as a wildcard value\n", " kwargs[\"control\"] = f\"{control}*\"\n", " # Try getting the results\n", " results = get_results(data_dir, **kwargs)\n", " # print(results)\n", " if results is False:\n", " # If there's still more than 20,000, add more letters/numbers to the control symbol!\n", " refine_controls(control, data_dir, **kwargs)\n", " # If there's less than 20,000 results, save them all\n", " else:\n", " get_results(data_dir, **kwargs)\n", " totals = save_csv(data_dir)\n", " save_metadata(search, data_dir, today, totals)\n", " print(f\"Harvest directory: {data_dir}\")\n", " display(FileLink(Path(data_dir, \"metadata.json\")))\n", " display(FileLink(Path(data_dir, \"results.ndjson\")))\n", " display(FileLink(Path(data_dir, \"results.csv\")))\n", " return data_dir\n", "\n", "\n", "def save_images(harvest_dir):\n", " df = pd.read_csv(Path(harvest_dir, \"results.csv\"))\n", " with tqdm(\n", " total=df.loc[df[\"digitised_status\"] == True].shape[0], desc=\"Files\"\n", " ) as pbar:\n", " for item in df.loc[df[\"digitised_status\"] == True].itertuples():\n", " image_dir = Path(\n", " f\"{harvest_dir}/images/{slugify(item.series)}-{slugify(str(item.control_symbol))}-{item.identifier}\"\n", " )\n", "\n", " # Create the folder (and parent if necessary)\n", " image_dir.mkdir(exist_ok=True, parents=True)\n", "\n", " # Loop through the page numbers\n", " for page in tqdm(\n", " range(1, int(item.digitised_pages) + 1), desc=\"Images\", leave=False\n", " ):\n", "\n", " # Define the image filename using the barcode and page number\n", " filename = Path(f\"{image_dir}/{item.identifier}-{page}.jpg\")\n", "\n", " # Check to see if the image 
already exists (useful if rerunning a failed harvest)\n", " if not filename.exists():\n", " # If it doesn't already exist then download it\n", " img_url = f\"https://recordsearch.naa.gov.au/NaaMedia/ShowImage.asp?B={item.identifier}&S={page}&T=P\"\n", " response = requests.get(img_url)\n", " try:\n", " response.raise_for_status()\n", " except requests.exceptions.HTTPError:\n", " pass\n", " else:\n", " filename.write_bytes(response.content)\n", "\n", " time.sleep(0.5)\n", " pbar.update(1)\n", "\n", "\n", "def save_pdfs(harvest_dir):\n", " df = pd.read_csv(Path(harvest_dir, \"results.csv\"))\n", " pdf_dir = Path(harvest_dir, \"pdfs\")\n", " pdf_dir.mkdir(exist_ok=True, parents=True)\n", " with tqdm(\n", " total=df.loc[df[\"digitised_status\"] == True].shape[0], desc=\"Files\"\n", " ) as pbar:\n", " for item in df.loc[df[\"digitised_status\"] == True].itertuples():\n", " pdf_file = Path(\n", " pdf_dir,\n", " f\"{slugify(item.series)}-{slugify(str(item.control_symbol))}-{item.identifier}.pdf\",\n", " )\n", " if not pdf_file.exists():\n", " pdf_url = f\"https://recordsearch.naa.gov.au/SearchNRetrieve/NAAMedia/ViewPDF.aspx?B={item.identifier}&D=D\"\n", " response = requests.get(pdf_url)\n", " try:\n", " response.raise_for_status()\n", " except requests.exceptions.HTTPError:\n", " pass\n", " else:\n", " pdf_file.write_bytes(response.content)\n", " time.sleep(0.5)\n", " pbar.update(1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Start a harvest\n", "\n", "Insert your search parameters in the brackets below.\n", "\n", "Examples:\n", "\n", "* `data_dir = harvest_search(kw='rabbit')`\n", "* `data_dir = harvest_search(kw='rabbit', digital=True)`\n", "* `data_dir = harvest_search(record_detail='full', kw='rabbit', series='A1')`\n", "* `data_dir = harvest_search(series='B13')`\n", "\n", "If you're running a long harvest, there's a good chance it will get interrupted at some point. Don't worry, just run the harvest cell again. The scraper caches your results, so it won't need to start from scratch." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_dir = harvest_search(kw=\"wragge exhibit\", record_detail=\"digitised\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [ "nbval-skip" ] }, "source": [ "## Saving images from digitised files\n", "\n", "Once you've saved all the metadata from your search, you can use it to download images from all the items that have been digitised.\n", "\n", "Note that you can only save the images if you set the `record_detail` parameter to 'digitised' or 'full' in the original harvest.\n", "\n", "The function below will look for all items that have a `digitised_pages` value in the harvest results, and then download an image for each page. The images will be saved in an `images` subdirectory, inside the original harvest directory." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "nbval-skip" ] }, "outputs": [], "source": [ "# Supply the path to the directory containing the harvested data\n", "# This is the value returned by the `harvest_search()` function.\n", "# eg: 'harvests/20210522_digital_True_kw_wragge_record_detail_full'\n", "save_images(data_dir)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Saving digitised files as PDFs\n", "\n", "You can also save digitised files as PDFs. The function below will save any digitised files in the results to a `pdfs` subdirectory within the harvest directory."
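, "\n", "\n", "If you only want a single file rather than a whole harvest, the sketch below shows the basic request that `save_pdfs()` makes for each digitised item. The barcode value is a placeholder, so substitute the `identifier` of the item you're after:\n", "\n", "```python\n", "import requests\n", "\n", "# Placeholder item barcode (the `identifier` field in the results)\n", "barcode = 123456\n", "\n", "# This is the same PDF delivery URL that save_pdfs() uses\n", "pdf_url = f\"https://recordsearch.naa.gov.au/SearchNRetrieve/NAAMedia/ViewPDF.aspx?B={barcode}&D=D\"\n", "response = requests.get(pdf_url)\n", "response.raise_for_status()\n", "\n", "with open(f\"{barcode}.pdf\", \"wb\") as pdf_file:\n", "    pdf_file.write(response.content)\n", "```"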
] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "nbval-skip" ] }, "outputs": [], "source": [ "# Supply the path to the directory containing the harvested data\n", "# This is the value returned by the `harvest_search()` function.\n", "# eg: 'harvests/20210522_digital_True_kw_wragge_record_detail_full'\n", "save_pdfs(data_dir)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "----\n", "\n", "Created by [Tim Sherratt](https://timsherratt.org/) for the [GLAM Workbench](https://glam-workbench.github.io/). Support me by becoming a [GitHub sponsor](https://github.com/sponsors/wragge)!" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.12" }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": {}, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 4 }