Model progress can be saved after as well as during training. This means a model can resume where it left off and avoid long training times. Saving also means you can share your model and others can recreate your work. When publishing research models and techniques, most machine learning practitioners share:

  • code to create the model, and
  • the trained weights, or parameters, for the model

Sharing this data helps others understand how the model works and try it themselves with new data.

Define a model

Let’s build a simple model we’ll use to demonstrate saving and loading weights.
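The model definition itself isn’t reproduced here, but a minimal sketch of the create_model() function used throughout this tutorial might look as follows (assuming 28 × 28 MNIST images flattened to 784-dimensional vectors; the activation, optimizer, and loss choices are illustrative):

library(keras)

create_model <- function() {
  model <- keras_model_sequential() %>%
    layer_dense(units = 512, activation = "relu", input_shape = 784) %>%
    layer_dropout(rate = 0.2) %>%
    layer_dense(units = 10, activation = "softmax")

  # Compile so the optimizer configuration can be saved along with the model
  model %>% compile(
    optimizer = "adam",
    loss = "sparse_categorical_crossentropy",
    metrics = list("accuracy")
  )

  model
}

model <- create_model()
summary(model)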

Layer (type)                          Output Shape                      Param #      
=====================================================================================
dense_1 (Dense)                       (None, 512)                       401920       
_____________________________________________________________________________________
dropout_1 (Dropout)                   (None, 512)                       0            
_____________________________________________________________________________________
dense_2 (Dense)                       (None, 10)                        5130         
=====================================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_____________________________________________________________________________________

Save the entire model

The standard way to save a Keras model is to the HDF5 format.

The resulting file contains the weight values, the model’s configuration, and even the optimizer’s configuration. This allows you to save a model and resume training later — from the exact same state — without access to the original code.

model <- create_model()

model %>% fit(train_images, train_labels, epochs = 5)

model %>% save_model_hdf5("my_model.h5")

If you only wanted to save the weights, you could replace that last line with:

model %>% save_model_weights_hdf5("my_model_weights.h5")

Now recreate the model from that file:
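Since the entire model (including its architecture) was saved, load_model_hdf5() can rebuild it without the original model-building code:

new_model <- load_model_hdf5("my_model.h5")
summary(new_model)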

_____________________________________________________________________________________
Layer (type)                          Output Shape                      Param #      
=====================================================================================
dense_3 (Dense)                       (None, 512)                       401920       
_____________________________________________________________________________________
dropout_2 (Dropout)                   (None, 512)                       0            
_____________________________________________________________________________________
dense_4 (Dense)                       (None, 10)                        5130         
=====================================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_____________________________________________________________________________________

Save checkpoints during training

It is useful to automatically save checkpoints during and at the end of training. This way you can use a trained model without having to retrain it, or pick up training where you left off in case the training process was interrupted.

callback_model_checkpoint is a callback that performs this task.

The callback takes a couple of arguments to configure checkpointing. By default, save_weights_only is set to FALSE, which means the complete model is saved, including its architecture and configuration. You can then restore the model as outlined in the previous paragraph.

Here, let’s focus on just saving and restoring weights. In the following code snippet, save_weights_only is set to TRUE, so we will need the model definition when restoring.

The filepath argument can contain named formatting options, for example: if filepath is weights.{epoch:02d}-{val_loss:.2f}.hdf5, then the model checkpoints will be saved with the epoch number and the validation loss in the filename.

The saved model weights will again be in HDF5 format.

Checkpoint callback usage

Train the model and pass it the callback_model_checkpoint:
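A sketch of this step, assuming a checkpoints/ directory and the data objects (train_images, train_labels, test_images, test_labels) used elsewhere in this tutorial:

checkpoint_dir <- "checkpoints"
dir.create(checkpoint_dir, showWarnings = FALSE)

# Save weights only, once per epoch, embedding epoch and val_loss in the filename
cp_callback <- callback_model_checkpoint(
  filepath = file.path(checkpoint_dir, "weights.{epoch:02d}-{val_loss:.2f}.hdf5"),
  save_weights_only = TRUE,
  verbose = 1
)

model <- create_model()

model %>% fit(
  train_images,
  train_labels,
  epochs = 10,
  validation_data = list(test_images, test_labels),
  callbacks = list(cp_callback)
)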

Inspect the files that were created:
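For example, listing the checkpoint_dir used in the sketch above:

list.files(checkpoint_dir)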

 [1] "weights.01-0.72.hdf5" "weights.02-0.51.hdf5" "weights.03-0.47.hdf5"
 [4] "weights.04-0.45.hdf5" "weights.05-0.42.hdf5" "weights.06-0.44.hdf5"
 [7] "weights.07-0.42.hdf5" "weights.08-0.40.hdf5" "weights.09-0.42.hdf5"
[10] "weights.10-0.42.hdf5"

Create a new, untrained model. When restoring a model from only weights, you must have a model with the same architecture as the original. Since it’s the same architecture, the weights can be shared even though it’s a different instance of the model.

Now rebuild a fresh, untrained model, and evaluate it on the test set. An untrained model will perform at chance levels (~10% accuracy):
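A sketch, reusing create_model() and the test set names assumed above:

fresh_model <- create_model()

score <- fresh_model %>% evaluate(test_images, test_labels, verbose = 0)
# Metric names can vary between keras versions; "loss" and "acc" are assumed here
cat("Test loss:", score$loss, "\n")
cat("Test accuracy:", score$acc, "\n")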

1000/1000 [==============================] - 0s 170us/step
Test loss: 2.411125 
Test accuracy: 0.088 

Then load the weights from the latest checkpoint (epoch 10), and re-evaluate:
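Roughly, taking the filename from the checkpoint listing above:

fresh_model %>% load_model_weights_hdf5(
  file.path(checkpoint_dir, "weights.10-0.42.hdf5")
)

score <- fresh_model %>% evaluate(test_images, test_labels, verbose = 0)
print(paste("Test loss:", score$loss))
print(paste("Test accuracy:", score$acc))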

1000/1000 [==============================] - 0s 34us/step
[1] "Test loss: 0.394947263240814"
[1] "Test accuracy: 0.873"

To reduce the number of files, you can also save model weights only once every nth epoch.
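One way to do this is the callback’s period argument; a sketch saving every fifth epoch (newer keras versions configure the save frequency differently, so check the callback_model_checkpoint documentation for your version):

cp_callback <- callback_model_checkpoint(
  filepath = file.path(checkpoint_dir, "weights.{epoch:02d}-{val_loss:.2f}.hdf5"),
  save_weights_only = TRUE,
  period = 5,  # write a checkpoint every 5th epoch
  verbose = 1
)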

[1] "weights.05-0.41.hdf5" "weights.10-0.41.hdf5"

Alternatively, you can also decide to save only the best model, where “best” by default means the lowest validation loss. See the documentation for callback_model_checkpoint for further information.
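For example, monitoring validation loss (the default) and writing a checkpoint only when it improves:

cp_callback <- callback_model_checkpoint(
  filepath = file.path(checkpoint_dir, "weights.{epoch:02d}-{val_loss:.2f}.hdf5"),
  save_weights_only = TRUE,
  save_best_only = TRUE,  # only save when val_loss improves on the best so far
  verbose = 1
)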

[1] "weights.01-0.72.hdf5" "weights.02-0.54.hdf5" "weights.03-0.46.hdf5"
[4] "weights.04-0.45.hdf5" "weights.05-0.43.hdf5" "weights.06-0.42.hdf5"
[7] "weights.09-0.41.hdf5"

In this case, weights were saved for all epochs except the 7th, 8th, and 10th, where validation loss did not improve.

More Tutorials

Check out these additional tutorials to learn more:

  • Basic Classification — In this tutorial, we train a neural network model to classify images of clothing, like sneakers and shirts.

  • Text Classification — This tutorial classifies movie reviews as positive or negative using the text of the review.

  • Basic Regression — This tutorial builds a model to predict the median price of homes in a Boston suburb during the mid-1970s.

  • Overfitting and Underfitting — In this tutorial, we explore two common regularization techniques (weight regularization and dropout) and use them to improve our movie review classification results.