---
title: "MNIST - CNN"
author: "Dr. Hua Zhou"
date: "3/15/2018"
output:
  html_document:
    toc: true
    toc_depth: 4
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```

```{r}
sessionInfo()
```

Source:

In this example, we train a convolutional neural network (CNN) on the [MNIST](https://en.wikipedia.org/wiki/MNIST_database) data set. It achieves a test accuracy of 99.16% after 12 epochs.

- The **MNIST** database (Modified National Institute of Standards and Technology database) is a large database of handwritten digit images ($28 \times 28$) that is commonly used for training and testing machine learning algorithms.

- 60,000 training images, 10,000 testing images.

## Prepare data

For a CNN, instead of vectorizing the images, we keep their 2D structure.

```{r}
library(keras)

# Data preparation ------------------------------------------------------

batch_size <- 128
num_classes <- 10
epochs <- 12

# Input image dimensions
img_rows <- 28
img_cols <- 28

# The data, shuffled and split between train and test sets
mnist <- dataset_mnist()
x_train <- mnist$train$x
y_train <- mnist$train$y
x_test <- mnist$test$x
y_test <- mnist$test$y

# Redefine dimensions of train/test inputs: add a single channel dimension
x_train <- array_reshape(x_train, c(nrow(x_train), img_rows, img_cols, 1))
x_test <- array_reshape(x_test, c(nrow(x_test), img_rows, img_cols, 1))
input_shape <- c(img_rows, img_cols, 1)

# Scale grayscale values from [0, 255] into the [0, 1] range
x_train <- x_train / 255
x_test <- x_test / 255

cat('x_train_shape:', dim(x_train), '\n')
cat(nrow(x_train), 'train samples\n')
cat(nrow(x_test), 'test samples\n')

# Convert class vectors to binary class matrices (one-hot encoding)
y_train <- to_categorical(y_train, num_classes)
y_test <- to_categorical(y_test, num_classes)
```

## Define model

Define the model: two convolutional layers, followed by max pooling, dropout, and two dense layers.

```{r}
model <- keras_model_sequential()
model %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = 'relu',
                input_shape = input_shape) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = 'relu') %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_dropout(rate = 0.25) %>%
  layer_flatten() %>%
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = num_classes, activation = 'softmax')
```

Compile the model with the categorical cross-entropy loss and the Adadelta optimizer:

```{r}
# Compile model
model %>% compile(
  loss = loss_categorical_crossentropy,
  optimizer = optimizer_adadelta(),
  metrics = c('accuracy')
)
summary(model)
```

## Train and evaluate

```{r}
system.time({
  history <- model %>% fit(
    x_train, y_train,
    batch_size = batch_size,
    epochs = epochs,
    verbose = 1,
    validation_data = list(x_test, y_test)
  )
})
plot(history)
```

Testing:

```{r}
scores <- model %>% evaluate(
  x_test, y_test, verbose = 0
)

# Output metrics
cat('Test loss:', scores[[1]], '\n')
cat('Test accuracy:', scores[[2]], '\n')
```
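## Predict

Beyond aggregate metrics, the fitted model can be used to label individual images. The final softmax layer means `predict()` returns a matrix of class probabilities, one row per image and one column per digit 0-9; the most probable column recovers the predicted digit. A minimal sketch (the names `pred_prob` and `pred_class` are ours, not from the original script):

```{r}
# Class probabilities for the test images (10000 x 10 matrix)
pred_prob <- model %>% predict(x_test)

# Predicted digit = most probable column; subtract 1 for 0-based labels
pred_class <- max.col(pred_prob) - 1

# Compare the first few predictions against the true test labels
head(pred_class)
head(mnist$test$y)
```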
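## Save the model

Since training takes 12 epochs, it can be worth persisting the fitted model to avoid retraining. A sketch using the HDF5 serialization helpers from the keras R package; the file name `mnist_cnn.h5` is arbitrary, and the chunk is marked `eval=FALSE` so knitting the document does not write files:

```{r, eval=FALSE}
# Persist the fitted model (architecture, weights, optimizer state) to disk
save_model_hdf5(model, "mnist_cnn.h5")

# Later, restore the exact model without retraining
model <- load_model_hdf5("mnist_cnn.h5")
```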