{ "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "1l8bWGmIJuQa" }, "source": [ "##### Copyright 2019 The TensorFlow Authors.\n" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "cellView": "form", "colab": {}, "colab_type": "code", "id": "CPSnXS88KFEo" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "89xNCIO5hiCj" }, "source": [ "# Save and load a model using a distribution strategy" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "9Ejs4QVxIdAm" }, "source": [ "\n", " \n", " \n", " \n", " \n", "\n", "
\n", " View on TensorFlow.org\n", " \n", " Run in Google Colab\n", " \n", " View source on GitHub\n", " \n", " Download notebook\n", "
" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "A0lG6qgThxAS" }, "source": [ "## Overview\n", "\n", "It's common to save and load a model during training. There are two sets of APIs for saving and loading a keras model: a high-level API, and a low-level API. This tutorial demonstrates how you can use the SavedModel APIs when using `tf.distribute.Strategy`. To learn about SavedModel and serialization in general, please read the [saved model guide](../../guide/saved_model.ipynb), and the [Keras model serialization guide](../../guide/keras/save_and_serialize.ipynb). Let's start with a simple example: " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "FITHltVKQ4eZ" }, "source": [ "Import dependencies:" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "RWG5HchAiOrZ" }, "outputs": [], "source": [ "!pip install tf-nightly\n", "import tensorflow_datasets as tfds\n", "\n", "import tensorflow as tf\n", "tfds.disable_progress_bar()" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "qqapWj98ptNV" }, "source": [ "Prepare the data and model using `tf.distribute.Strategy`:" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "yrYiAf_ziRyw" }, "outputs": [], "source": [ "mirrored_strategy = tf.distribute.MirroredStrategy()\n", "\n", "def get_data():\n", " datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)\n", " mnist_train, mnist_test = datasets['train'], datasets['test']\n", "\n", " BUFFER_SIZE = 10000\n", "\n", " BATCH_SIZE_PER_REPLICA = 64\n", " BATCH_SIZE = BATCH_SIZE_PER_REPLICA * mirrored_strategy.num_replicas_in_sync\n", "\n", " def scale(image, label):\n", " image = tf.cast(image, tf.float32)\n", " image /= 255\n", "\n", " return image, label\n", "\n", " train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)\n", " eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)\n", "\n", " return train_dataset, eval_dataset\n", "\n", "def get_model():\n", " with mirrored_strategy.scope():\n", " model = tf.keras.Sequential([\n", " tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),\n", " tf.keras.layers.MaxPooling2D(),\n", " tf.keras.layers.Flatten(),\n", " tf.keras.layers.Dense(64, activation='relu'),\n", " tf.keras.layers.Dense(10)\n", " ])\n", "\n", " model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n", " optimizer=tf.keras.optimizers.Adam(),\n", " metrics=['accuracy'])\n", " return model" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "qmU4Y3feS9Na" }, "source": [ "Train the model: " ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "zmGurbJmS_vN" }, "outputs": [], "source": [ "model = get_model()\n", "train_dataset, eval_dataset = get_data()\n", "model.fit(train_dataset, epochs=2)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "L01wjgvRizHS" }, "source": [ "## Save and load the model\n", "\n", "Now that you have a simple model to work with, let's take a look at the saving/loading APIs. 
\n", "There are two sets of APIs available:\n", "\n", "* High level keras `model.save` and `tf.keras.models.load_model`\n", "* Low level `tf.saved_model.save` and `tf.saved_model.load`\n" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "FX_IF2F1tvFs" }, "source": [ "### The Keras APIs" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "O8xfceg4Z3H_" }, "source": [ "Here is an example of saving and loading a model with the Keras APIs:" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "LYOStjV5knTQ" }, "outputs": [], "source": [ "keras_model_path = \"/tmp/keras_save\"\n", "model.save(keras_model_path) # save() should be called out of strategy scope" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "yvQIdQp3zNMp" }, "source": [ "Restore the model without `tf.distribute.Strategy`:" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "WrXAAVtrzRgv" }, "outputs": [], "source": [ "restored_keras_model = tf.keras.models.load_model(keras_model_path)\n", "restored_keras_model.fit(train_dataset, epochs=2)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "gYAnskzorda-" }, "source": [ "After restoring the model, you can continue training on it, even without needing to call `compile()` again, since it is already compiled before saving. The model is saved in the TensorFlow's standard `SavedModel` proto format. For more information, please refer to [the guide to `saved_model` format](../../guide/saved_model.ipynb).\n", "\n", "It is important to only call the `model.save()` method out of the scope of `tf.distribute.strategy`. Calling it within the scope is not supported.\n", "\n", "Now to load the model and train it using a `tf.distribute.Strategy`:" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "wROPrJaAqBQz" }, "outputs": [], "source": [ "another_strategy = tf.distribute.OneDeviceStrategy(\"/cpu:0\")\n", "with another_strategy.scope():\n", " restored_keras_model_ds = tf.keras.models.load_model(keras_model_path)\n", " restored_keras_model_ds.fit(train_dataset, epochs=2)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "PdiiPmL5tQk5" }, "source": [ "As you can see, loading works as expected with `tf.distribute.Strategy`. The strategy used here does not have to be the same strategy used before saving. " ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "3CrXIbmFt0f6" }, "source": [ "### The `tf.saved_model` APIs" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "HtGzPp6et4Em" }, "source": [ "Now let's take a look at the lower level APIs. Saving the model is similar to the keras API:" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "4y6T31APuCqK" }, "outputs": [], "source": [ "model = get_model() # get a fresh model\n", "saved_model_path = \"/tmp/tf_save\"\n", "tf.saved_model.save(model, saved_model_path)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "q1QNRYcwuRll" }, "source": [ "Loading can be done with `tf.saved_model.load()`. However, since it is an API that is on the lower level (and hence has a wider range of use cases), it does not return a Keras model. Instead, it returns an object that contain functions that can be used to do inference. 
{ "cell_type": "markdown", "metadata": { "colab_type": "text" }, "source": [ "For example, to get the default inference function:" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "aaEKqBSPwAuM" }, "outputs": [], "source": [ "DEFAULT_FUNCTION_KEY = \"serving_default\"\n", "loaded = tf.saved_model.load(saved_model_path)\n", "inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "x65l7AaHUZCA" }, "source": [ "The loaded object may contain multiple functions, each associated with a key. `\"serving_default\"` is the default key for the inference function of a saved Keras model. To run inference with this function:" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "5Ore5q8-UjW1" }, "outputs": [], "source": [ "predict_dataset = eval_dataset.map(lambda image, label: image)\n", "for batch in predict_dataset.take(1):\n", " print(inference_func(batch))" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "osB1LY8WwUJZ" }, "source": [ "You can also load and run inference in a distributed manner:" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "iDYvu12zYTmT" }, "outputs": [], "source": [ "another_strategy = tf.distribute.MirroredStrategy()\n", "with another_strategy.scope():\n", " loaded = tf.saved_model.load(saved_model_path)\n", " inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]\n", "\n", " dist_predict_dataset = another_strategy.experimental_distribute_dataset(\n", " predict_dataset)\n", "\n", " # Calling the function in a distributed manner\n", " for batch in dist_predict_dataset:\n", " another_strategy.run(inference_func, args=(batch,))" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "hWGSukoyw3fF" }, "source": [ "Calling the restored function is just a forward pass on the saved model (predict). What if you want to continue training the loaded function, or embed it into a bigger model? A common practice is to wrap the loaded object into a Keras layer. Luckily, [TF Hub](https://www.tensorflow.org/hub) has [hub.KerasLayer](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/keras_layer.py) for this purpose, shown here:" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "clfk3hQoyKu6" }, "outputs": [], "source": [ "import tensorflow_hub as hub\n", "\n", "def build_model(loaded):\n", " x = tf.keras.layers.Input(shape=(28, 28, 1), name='input_x')\n", " # Wrap what's loaded into a KerasLayer\n", " keras_layer = hub.KerasLayer(loaded, trainable=True)(x)\n", " model = tf.keras.Model(x, keras_layer)\n", " return model\n", "\n", "another_strategy = tf.distribute.MirroredStrategy()\n", "with another_strategy.scope():\n", " loaded = tf.saved_model.load(saved_model_path)\n", " model = build_model(loaded)\n", "\n", " model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n", " optimizer=tf.keras.optimizers.Adam(),\n", " metrics=['accuracy'])\n", " model.fit(train_dataset, epochs=2)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Oe1z_OtSJlu2" }, "source": [ "As you can see, `hub.KerasLayer` wraps the result loaded back from `tf.saved_model.load()` into a Keras layer that can be used to build another model. This is very useful for transfer learning." ] },
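{ "cell_type": "markdown", "metadata": { "colab_type": "text" }, "source": [ "Since the wrapper is a regular compiled Keras model, you can also evaluate it, for instance on the test split prepared earlier (a quick sanity check that reuses `eval_dataset` from above):" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code" }, "outputs": [], "source": [ "# Evaluate the fine-tuned wrapper model on the eval split.\n", "model.evaluate(eval_dataset)" ] },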
" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "KFDOZpK5Wa3W" }, "source": [ "### Which API should I use?" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "GC6GQ9HDLxD6" }, "source": [ "For saving, if you are working with a keras model, it is almost always recommended to use the Keras's `model.save()` API. If what you are saving is not a Keras model, then the lower level API is your only choice. \n", "\n", "For loading, which API you use depends on what you want to get from the loading API. If you cannot (or do not want to) get a Keras model, then use `tf.saved_model.load()`. Otherwise, use `tf.keras.models.load_model()`. Note that you can get a Keras model back only if you saved a Keras model. \n", "\n", "It is possible to mix and match the APIs. You can save a Keras model with `model.save`, and load a non-Keras model with the low-level API, `tf.saved_model.load`. " ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "Ktwg2GwnXE8v" }, "outputs": [], "source": [ "model = get_model()\n", "\n", "# Saving the model using Keras's save() API\n", "model.save(keras_model_path) \n", "\n", "another_strategy = tf.distribute.MirroredStrategy()\n", "# Loading the model using lower level API\n", "with another_strategy.scope():\n", " loaded = tf.saved_model.load(keras_model_path)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "hJTWOnC9iuA3" }, "source": [ "### Caveats" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Tzog2ti7YYgy" }, "source": [ "A special case is when you have a Keras model that does not have well-defined inputs. For example, a Sequential model can be created without any input shapes (`Sequential([Dense(3), ...]`). Subclassed models also do not have well-defined inputs after initialization. In this case, you should stick with the lower level APIs on both saving and loading, otherwise you will get an error. \n", "\n", "To check if your model has well-defined inputs, just check if `model.inputs` is `None`. If it is not `None`, you are all good. Input shapes are automatically defined when the model is used in `.fit`, `.evaluate`, `.predict`, or when calling the model (`model(inputs)`). \n", "\n", "Here is an example:" ] }, { "cell_type": "code", "execution_count": 0, "metadata": { "colab": {}, "colab_type": "code", "id": "gurSIbDFjOBc" }, "outputs": [], "source": [ "class SubclassedModel(tf.keras.Model):\n", "\n", " output_name = 'output_layer'\n", "\n", " def __init__(self):\n", " super(SubclassedModel, self).__init__()\n", " self._dense_layer = tf.keras.layers.Dense(\n", " 5, dtype=tf.dtypes.float32, name=self.output_name)\n", "\n", " def call(self, inputs):\n", " return self._dense_layer(inputs)\n", "\n", "my_model = SubclassedModel()\n", "# my_model.save(keras_model_path) # ERROR! \n", "tf.saved_model.save(my_model, saved_model_path)" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "save_and_load.ipynb", "private_outputs": true, "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }