{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook, we're following the [Tensorflow for Poets](https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0) tutorial, using Python to make an archaeological image classifier. You need tensorflow 1.15. This notebook + binder already has tensorflow installed. If you were working on your own machine, you'd install tensorflow with \n", "\n", "```pip install --upgrade \"tensorflow==1.15.*\"```\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We need to get some training data, organized like so (imagining we were building a classifier that recognized Roman pottery types):\n", "```\n", "|\n", "|-training-images\n", " |\n", " |-Roman_pottery\n", " |-terrasig\n", " |-african_red_slip\n", " |-veranice_nera\n", "```\n", "\n", "That is, each category gets its own directory, and the category label is the directory name.\n", "\n", "If you examine the directory structure in this notebook binder, you'll see in the[tf_files/gallery](tf_files/gallery) folder that we've already provided two categories with some images in each. We grabbed this data by scraping the images at [the Atlas of Roman Pottery](http://potsherds.net) and by downloading data from the [Portable Antiquities Scheme](http://finds.org.uk). \n", "\n", "This notebook is only a demonstration; for your own needs you'd need several thousand images overall (and generally, at least twenty for each category), and put them in the tf_files folder.\n", "\n", "So let's get started." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Retraining Script\n", "\n", "The script we're going to use is called `retrain`. You can look at the script [here](/edit/scripts/retrain.py) (don't make any changes to it!). We're going to call the script in the codeblock below, but let's take a look at it first:\n", "\n", "```\n", "python -m scripts.retrain \\\n", " --bottleneck_dir=tf_files/bottlenecks \\\n", " --how_many_training_steps=500 \\\n", " --model_dir=tf_files/models/ \\\n", " --summaries_dir=tf_files/training_summaries/\"${ARCHITECTURE}\" \\\n", " --output_graph=tf_files/retrained_graph.pb \\\n", " --output_labels=tf_files/retrained_labels.txt \\\n", " --architecture=\"${ARCHITECTURE}\" \\\n", " --image_dir=tf_files/{training_images}\n", "```\n", "\n", "The last flag, `image_dir` is just the path to the training images relative to the script.\n", "\n", "See that `${ARCHITECTURE]}` bit? That refers to an already-trained model that we are going to add to with our images. What we are doing is adding another layer to the model that takes what the model has already learned about the world and gives it just a bit more information about our particular use-case. If we use the Inception-v3 model, that means we're adding (2048 x n categories) model parameters corresponding to weights in the network. If we use the Mobilenet architecture, which 'sees' only 1001 dimensions, that means we're adding (1001 x n categories) model parameters. We're going to use mobilenet, which has\n", "\n", "> ... 32 different Mobilenet models to choose from, with a variety of file\n", "size and latency options. The first number can be '1.0', '0.75', '0.50', or\n", "'0.25' to control the size, and the second controls the input image size, either\n", "'224', '192', '160', or '128', with smaller sizes running faster\n", "\n", "We're going to use `mobilenet_0.50_224` for now.\n", "\n", "Because our training data is very small in this demo (anything less than 10000 images counts as small!), we have to add another flag to our command, concerning validation batch size. Otherwise we’ll get an error message. Note also that we’re only training for 500 steps; more steps will generally get better results (but diminishing returns also apply). So we'll add this flag:\n", "\n", "`--validation_batch_size=-1 `\n", "\n", "The codeblock below has our finished command. When you run it, it *might* look like there's been some errors, but just scroll down the result pane. Eventually you'll start seeing data on how well the training is going. This might take a few minutes, depending on how much data we're computing. You'll know it's finished when you see `INFO:tensorflow:Final test accuracy` and the asterisk to the top left of the codeblock disappears.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!python -m scripts.retrain \\\n", " --bottleneck_dir=tf_files/bottlenecks \\\n", " --how_many_training_steps=500 \\\n", " --model_dir=tf_files/models/ \\\n", " --summaries_dir=tf_files/training_summaries/mobilenet_0.50_224 \\\n", " --output_graph=tf_files/retrained_graph.pb \\\n", " --output_labels=tf_files/retrained_labels.txt \\\n", " --architecture mobilenet_0.50_224 \\\n", " --validation_batch_size=-1 \\\n", " --image_dir=tf_files/gallery" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This will run for a while. Elements to explore: different mobilenet architectures, increasing iterations, more data. Once it’s finished training, let’s test it:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!python -m scripts.label_image \\\n", " --graph=tf_files/retrained_graph.pb \\\n", " --image=tf_files/testing/B4-1-f.jpg" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The result indicates the probability that the image corresponds to the category. Find an image of an amphora or a Roman fibula, load it into the `testing` subfolder, and modify the code above to try out. Or just change the path to one of the other sample photos.\n", "\n", "Congratulations - you've used transfer learning to build a (very small) image classifier for archaeology!" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.5" } }, "nbformat": 4, "nbformat_minor": 2 }