{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "## Открытый курс по машинному обучению\n", "
Автор материала: Емельянов Георгий (@georguy), студент 4 курса СПБГУАП" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#
Convolution Neural Networks. Traffic Signs Recognition
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "В этом туториале речь пойдет про распознавание дорожных знаков с использованием сверточных нейронных сетей.\n", "В сети можно легко найти материалы по сверточным нейронным сетям, поэтому здесь не будет большого количества теории и описания библиотеки TensorFlow, с помощью которой мы и будем сегодня строить сеть. В конце статьи находится список полезной литературы, который поможет глубже узнать сверточные нейронные сети." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## СNN" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Особенностью сверточных нейронных сетей является наличие так называемых сверточных слоев, которые выполняют операцию свертки. Выражение «свернуть изображение» означает, что необходимо пространственно пробежать по изображению и вычислить скалярные произведения.\n", "\tКаждый сверточный слой передаёт в новый слой так называемые карты признаков, которые и являются результатами скалярного произведения.\n", "\tЭти карты признаков поступают на вход субдискретизирующих слоёв, задача которых состоит в уменьшении размерность карт признаков.\n", "\tДля уменьшения размерности часто используют одну из двух функций: max pooling, average pooling. Первая функция выбирает максимальное значение уменьшаемого участка карты признаков. Вторая функция выбирает уже среднее значение. Таким образом находится признак, наличие которого является наиболее ключевым фактором отнесения классифицируемого изображение к одному из классов." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Сразу в бой\n", "Не будем раскачиваться на теории, а сразу возьмемся за практику. По [ссылке](https://drive.google.com/file/d/0ByDNm-bvLJQEN2xtTHo4STgyMkE/view?usp=sharing) вы можете скачать тренировочные и тестовые наборы картинок в усеченном и расширенном объеме. В туториале используется усеченная выборка для экономии времени - в тренировочной выборке содержится всего 11100 изображений(в то время как в расширенном - 473000).\n", "\n", "Для построения модели, обучения и тестирования используется Tensorflow версии 0.12.1.\n", "\n", "Приступим." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Импортируем все необходимые библиотеки\n", "import warnings\n", "\n", "warnings.filterwarnings('ignore')\n", "%matplotlib inline\n", "import pickle\n", "import sys\n", "import time\n", "\n", "import matplotlib\n", "import numpy as np\n", "from pandas.io.parsers import read_csv\n", "\n", "matplotlib.use('TkAgg', warn = False)\n", "import tensorflow as tf\n", "from matplotlib import pyplot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Функция для загрузки dataset'ов\n", "def load_pickled_data(file, columns):\n", " with open(file, mode='rb') as f:\n", " dataset = pickle.load(f)\n", " return tuple(map(lambda c: dataset[c], columns))\n", "\n", "# Функция для подсчета времени- понадобится при логировании времени обучения\n", "def get_time_hhmmss(start = None):\n", " if start is None:\n", " return time.strftime(\"%Y/%m/%d %H:%M:%S\")\n", " end = time.time()\n", " m, s = divmod(end - start, 60)\n", " h, m = divmod(m, 60)\n", " time_str = \"%02d:%02d:%02d\" % (h, m, s)\n", " return time_str" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Посмотрим на тренировочный и тестовый наборы данных" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_dataset_path = \"../../../data/german-traffic-signs/truncated/original/train.p\"\n", "test_dataset_path = \"../../../data/german-traffic-signs/truncated/original/test.p\"\n", "\n", "X_train, Y_train = load_pickled_data(train_dataset_path, ['features', \n", " 'labels'])\n", "X_test, Y_test = load_pickled_data(test_dataset_path, ['features',\n", " 'labels'])\n", "n_train = len(Y_train)\n", "n_test = len(Y_test)\n", "image_shape = X_train[0].shape\n", "image_size = image_shape[0]\n", "sign_classes, class_indices, class_counts = np.unique(Y_train, return_index=True, return_counts=True)\n", "n_classes = len(class_counts)\n", "print(\"Число тренировочных изображений =\", n_train)\n", "print(\"Число тестовых изображений =\", n_test)\n", "print(\"Shape изображений =\", image_shape)\n", "print(\"Число классов =\", n_classes)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import random\n", "\n", "signnames = read_csv(\"../../../data/german-traffic-signs/truncated/signnames.csv\").values[:, 1]\n", "\n", "col_width = max(len(name) for name in signnames)\n", "\n", "for c, c_index, c_count in zip(sign_classes, class_indices, class_counts):\n", " print(\"Класс %i: %-*s %s изображений\" %(c, col_width, signnames[c], str(c_count)))\n", " fig = pyplot.figure(figsize = (6, 1))\n", " fig.subplots_adjust(left = 0, right = 1, bottom = 0, top = 1, hspace = 0.05, wspace = 0.05)\n", " random_indices = random.sample(range(c_index, c_index + c_count), 10)\n", " for i in range(10):\n", " axis = fig.add_subplot(1, 10, i + 1, xticks=[], yticks=[])\n", " axis.imshow(X_train[random_indices[i]])\n", " pyplot.show()\n", " \n", "fig = pyplot.figure(figsize = (9, 5))\n", "\n", "pyplot.bar( np.arange( n_classes ), class_counts, align='center' )\n", "pyplot.xlabel('Класс')\n", "pyplot.ylabel('Количество изображений')\n", "pyplot.xlim([-1, n_classes])\n", "pyplot.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "В тренировочной выборке присутствуют 6 классов дорожных знаков, изображения в 5 из которых распределены примерно равномерно, за исключением 6-го класса, в котором всего лишь 300 объектов. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Видим, что изображения цветные – содержат три канала. Но наша модель будет распознавать изображения заранее обработанные – лишь с одним каналом.\n", "\n", "Исходные изображения находятся в цветовом пространстве RGB. Удобным способом снижения размерности является преобразование исходного цветного изображения в черно-белое изображение с помощью компоненты яркости Y. Y – одна из составляющих цветового пространства YCbCr.\n", "\n", "Обработанные данные уже содержатся в скачанном архиве, поэтому обработкой заниматься вам не придется." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_preprocessed_dataset_path = \"../../../data/german-traffic-signs/truncated/preprocessed/train_preprocessed.p\"\n", "test_preprocessed_dataset_path = \"../../../data/german-traffic-signs/truncated/preprocessed/test_preprocessed.p\"\n", "\n", "X_train_preprocessed, Y_train_preprocessed = load_pickled_data(train_preprocessed_dataset_path, ['features', \n", " 'labels'])\n", "X_test_preprocessed, Y_test_preprocessed = load_pickled_data(test_preprocessed_dataset_path, ['features',\n", " 'labels'])\n", "image_shape = X_train_preprocessed[0].shape\n", "image_size = image_shape[0]\n", "print(\"Shape изображений =\", image_shape)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "col_width = max(len(name) for name in signnames)\n", "\n", "for c, c_index, c_count in zip(sign_classes, class_indices, class_counts):\n", " print(\"Класс %i: %-*s %s изображений\" %(c, col_width, signnames[c], str(c_count)))\n", " fig = pyplot.figure(figsize = (6, 1))\n", " fig.subplots_adjust(left = 0, right = 1, bottom = 0, top = 1, hspace = 0.05, wspace = 0.05)\n", " random_indices = random.sample(range(c_index, c_index + c_count), 10)\n", " for i in range(10):\n", " axis = fig.add_subplot(1, 10, i + 1, xticks=[], yticks=[])\n", " axis.imshow(X_train_preprocessed[random_indices[i]].reshape(32, 32), cmap='gray')\n", " pyplot.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Архитектура нашей сети будет иметь следующий вид\n", "\n", "
\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Первым делом давайте определим структуру для удобной организации параметров сети" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from collections import namedtuple\n", "\n", "# Структура для организации параметров модели\n", "Parameters = namedtuple('Parameters', [\n", " # Данные об изображениях\n", " 'num_classes', 'image_size', \n", " # Параметры обучения\n", " 'batch_size', 'max_epochs', 'log_epoch', 'print_epoch',\n", " # Оптимизации\n", " 'learning_rate_decay', 'learning_rate',\n", " 'l2_reg_enabled', 'l2_lambda', \n", " 'early_stopping_enabled', 'early_stopping_patience', \n", " 'resume_training', \n", " # Архитектура слоёв\n", " 'conv1_k', 'conv1_d', 'conv1_p', \n", " 'conv2_k', 'conv2_d', 'conv2_p', \n", " 'conv3_k', 'conv3_d', 'conv3_p', \n", " 'fc4_size', 'fc4_p'\n", " ])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Так же нам понадобится класс, с помощью которого мы сможем получить информацию о параметрах сети во время и после её обучения" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "\n", "class Paths(object):\n", " def __init__(self, params):\n", " self.model_name = self.get_model_name(params)\n", " self.var_scope = self.get_variables_scope(params)\n", " self.root_path = os.getcwd() + \"/models/\" + self.model_name + \"/\"\n", " self.model_path = self.get_model_path()\n", " self.train_history_path = self.get_train_history_path()\n", " self.learning_curves_path = self.get_learning_curves_path()\n", " os.makedirs(self.root_path, exist_ok = True)\n", " \n", " def get_model_name(self, params):\n", " model_name = \"k{}d{}p{}_k{}d{}p{}_k{}d{}p{}_fc{}p{}\".format(\n", " params.conv1_k, params.conv1_d, params.conv1_p, \n", " params.conv2_k, params.conv2_d, params.conv2_p, \n", " params.conv3_k, params.conv3_d, params.conv3_p, \n", " params.fc4_size, params.fc4_p\n", " )\n", " model_name += \"_lrdec\" if params.learning_rate_decay else \"_no-lrdec\"\n", " model_name += \"_l2\" if params.l2_reg_enabled else \"_no-l2\"\n", " return model_name\n", " \n", " def get_variables_scope(self, params):\n", " var_scope = \"k{}d{}_k{}d{}_k{}d{}_fc{}_fc0\".format(\n", " params.conv1_k, params.conv1_d,\n", " params.conv2_k, params.conv2_d,\n", " params.conv3_k, params.conv3_d, \n", " params.fc4_size\n", " )\n", " return var_scope\n", " \n", " def get_model_path(self):\n", " return self.root_path + \"model.ckpt\"\n", " \n", " def get_train_history_path(self):\n", " return self.root_path + \"train_history\"\n", " \n", " def get_learning_curves_path(self):\n", " return self.root_path + \"learning_curves.png\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Создадим класс EarlyStopping, который поможет избежать переобучения, а также поможет следить за лучшими результатами правильных ответов" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class EarlyStopping(object):\n", " def __init__(self, saver, session, patience = 50, minimize = True):\n", " self.minimize = minimize\n", " self.patience = patience\n", " self.saver = saver\n", " self.session = session\n", " self.best_monitored_value = np.inf if minimize else 0.\n", " self.best_monitored_epoch = 0\n", " self.restore_path = None\n", " \n", " def __call__(self, value, epoch):\n", " if (self.minimize and value < self.best_monitored_value) or (not self.minimize and value > self.best_monitored_value):\n", " self.best_monitored_value = value\n", " self.best_monitored_epoch = epoch\n", " self.restore_path = self.saver.save(self.session, os.getcwd() + \"/early_stopping_checkpoint\")\n", " elif self.best_monitored_epoch + self.patience < epoch:\n", " if self.restore_path != None:\n", " self.saver.restore(self.session, self.restore_path)\n", " else:\n", " print(\"ERROR: Failed to restore session\")\n", " return True\n", " \n", " return False" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Определяем функцию для логирования этапов и состояния обучения" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from cloudlog import CloudLog\n", "\n", "\n", "class ModelCloudLog(CloudLog):\n", " \n", " def log_parameters(self, params, train_size, valid_size, test_size):\n", " if params.resume_training:\n", " self(\"===============================================\")\n", " self(\"============= Продолжение обучения ============\")\n", " self(\"===============================================\")\n", "\n", " self(\"=================== Данные ====================\")\n", " self(\" Train dataset: {} изображений\".format(train_size))\n", " self(\" Valid dataset: {} изображений\".format(valid_size))\n", " self(\" Test dataset: {} изображений\".format(test_size))\n", " self(\" Batch size: {}\".format(params.batch_size)) \n", "\n", " self(\"=================== Модель ===================\")\n", " self(\"---------------- Архитектура -----------------\") \n", " self(\" %-*s %-*s %-*s %-*s\" % (10, \"\", 10, \"Type\", 8, \"Size\", 15, \"Dropout (keep p)\")) \n", " self(\" %-*s %-*s %-*s %-*s\" % (10, \"1 слой\", 10, \"{}x{} Conv\".format(params.conv1_k, params.conv1_k), 8, str(params.conv1_d), 15, str(params.conv1_p))) \n", " self(\" %-*s %-*s %-*s %-*s\" % (10, \"2 слой\", 10, \"{}x{} Conv\".format(params.conv2_k, params.conv2_k), 8, str(params.conv2_d), 15, str(params.conv2_p))) \n", " self(\" %-*s %-*s %-*s %-*s\" % (10, \"3 слой\", 10, \"{}x{} Conv\".format(params.conv3_k, params.conv3_k), 8, str(params.conv3_d), 15, str(params.conv3_p))) \n", " self(\" %-*s %-*s %-*s %-*s\" % (10, \"4 слой\", 10, \"FC\", 8, str(params.fc4_size), 15, str(params.fc4_p))) \n", " self(\"----------------- Параметры ------------------\")\n", " self(\" Learning rate: \" + (\"Enabled\" if params.learning_rate_decay else \"Disabled (rate = {})\".format(params.learning_rate)))\n", " self(\" L2 Regularization: \" + (\"Enabled (lambda = {})\".format(params.l2_lambda) if params.l2_reg_enabled else \"Disabled\"))\n", " self(\" Early stopping: \" + (\"Enabled (patience = {})\".format(params.early_stopping_patience) if params.early_stopping_enabled else \"Disabled\"))\n", " self(\" Keep training old model: \" + (\"Enabled\" if params.resume_training else \"Disabled\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Модель\n", "Объявим вспомогательные функции-обертки над Tensorflow для каждого типа слоя" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Полносвязный слой\n", "def fully_connected(input, size):\n", " weights = tf.get_variable( 'weights', \n", " shape = [input.get_shape()[1], size],\n", " initializer = tf.contrib.layers.xavier_initializer()\n", " )\n", " biases = tf.get_variable( 'biases',\n", " shape = [size],\n", " initializer = tf.constant_initializer(0.0)\n", " )\n", " return tf.matmul(input, weights) + biases\n", "\n", "def fully_connected_relu(input, size):\n", " return tf.nn.relu(fully_connected(input, size))\n", "\n", "# Сверточный слой\n", "def conv_relu(input, kernel_size, depth):\n", " weights = tf.get_variable( 'weights', \n", " shape = [kernel_size, kernel_size, input.get_shape()[3], depth],\n", " initializer = tf.contrib.layers.xavier_initializer()\n", " )\n", " biases = tf.get_variable( 'biases',\n", " shape = [depth],\n", " initializer = tf.constant_initializer(0.0)\n", " )\n", " conv = tf.nn.conv2d( input,\n", " weights,\n", " strides = [1, 1, 1, 1], padding = 'SAME')\n", " return tf.nn.relu(conv + biases)\n", "\n", "# Субдискретизирующий слой\n", "def pool(input, size):\n", " return tf.nn.max_pool( input, \n", " ksize = [1, size, size, 1], \n", " strides = [1, size, size, 1], \n", " padding = 'SAME'\n", " )\n", "\n", "# Построение архитектуры сети\n", "def model_pass(input, params, is_training):\n", "\n", " with tf.variable_scope('conv1'):\n", " conv1 = conv_relu(input, kernel_size = params.conv1_k, depth = params.conv1_d) \n", " with tf.variable_scope('pool1'): \n", " pool1 = pool(conv1, size = 2)\n", " pool1 = tf.cond(is_training, lambda: tf.nn.dropout(pool1, keep_prob = params.conv1_p), lambda: pool1)\n", " with tf.variable_scope('conv2'):\n", " conv2 = conv_relu(pool1, kernel_size = params.conv2_k, depth = params.conv2_d)\n", " with tf.variable_scope('pool2'):\n", " pool2 = pool(conv2, size = 2)\n", " pool2 = tf.cond(is_training, lambda: tf.nn.dropout(pool2, keep_prob = params.conv2_p), lambda: pool2)\n", " with tf.variable_scope('conv3'):\n", " conv3 = conv_relu(pool2, kernel_size = params.conv3_k, depth = params.conv3_d)\n", " with tf.variable_scope('pool3'):\n", " pool3 = pool(conv3, size = 2)\n", " pool3 = tf.cond(is_training, lambda: tf.nn.dropout(pool3, keep_prob = params.conv3_p), lambda: pool3)\n", " \n", " pool1 = pool(pool1, size = 4)\n", " shape = pool1.get_shape().as_list()\n", " pool1 = tf.reshape(pool1, [-1, shape[1] * shape[2] * shape[3]])\n", " \n", " pool2 = pool(pool2, size = 2)\n", " shape = pool2.get_shape().as_list()\n", " pool2 = tf.reshape(pool2, [-1, shape[1] * shape[2] * shape[3]]) \n", " \n", " shape = pool3.get_shape().as_list()\n", " pool3 = tf.reshape(pool3, [-1, shape[1] * shape[2] * shape[3]])\n", " \n", " flattened = tf.concat(1, [pool1, pool2, pool3])\n", " \n", " with tf.variable_scope('fc4'):\n", " fc4 = fully_connected_relu(flattened, size = params.fc4_size)\n", " fc4 = tf.cond(is_training, lambda: tf.nn.dropout(fc4, keep_prob = params.fc4_p), lambda: fc4)\n", " with tf.variable_scope('out'):\n", " logits = fully_connected(fc4, size = params.num_classes)\n", " return logits" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "В конце обучения нам необходимо будет вывести график зависимости функции потерь (loss) и доли правильных ответов (accuracy) от номера эпохи. Поэтому определим для этого две функции, которые и будут заниматься отрисовкой." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plot_curve(axis, params, train_column, valid_column, linewidth = 2, train_linestyle = \"b-\", valid_linestyle = \"g-\"):\n", " model_history = np.load(Paths(params).train_history_path + \".npz\")\n", " train_values = model_history[train_column]\n", " valid_values = model_history[valid_column]\n", " epochs = train_values.shape[0]\n", " x_axis = np.arange(epochs)\n", " axis.plot(x_axis[train_values > 0], train_values[train_values > 0], train_linestyle, linewidth=linewidth, label=\"train\")\n", " axis.plot(x_axis[valid_values > 0], valid_values[valid_values > 0], valid_linestyle, linewidth=linewidth, label=\"valid\")\n", " return epochs\n", "\n", "def plot_learning_curves(params):\n", " curves_figure = pyplot.figure(figsize = (10, 4))\n", " axis = curves_figure.add_subplot(1, 2, 1)\n", " epochs_plotted = plot_curve(axis, parameters, train_column = \"train_accuracy_history\", valid_column = \"valid_accuracy_history\")\n", "\n", " pyplot.grid()\n", " pyplot.legend()\n", " pyplot.xlabel(\"epoch\")\n", " pyplot.ylabel(\"accuracy\")\n", " pyplot.ylim(50., 115.)\n", " pyplot.xlim(0, epochs_plotted)\n", "\n", " axis = curves_figure.add_subplot(1, 2, 2)\n", " epochs_plotted = plot_curve(axis, parameters, train_column = \"train_loss_history\", valid_column = \"valid_loss_history\")\n", "\n", " pyplot.grid()\n", " pyplot.legend()\n", " pyplot.xlabel(\"epoch\")\n", " pyplot.ylabel(\"loss\")\n", " pyplot.ylim(0.0001, 10.)\n", " pyplot.xlim(0, epochs_plotted)\n", " pyplot.yscale(\"log\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ну и наконец-то метод, который и будет инициировать процесс обучения и тестирования." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nolearn.lasagne import BatchIterator\n", "\n", "\n", "def train_model(params, X_train, y_train, X_valid, y_valid, X_test, y_test):\n", " paths = Paths(params)\n", " log = ModelCloudLog(os.path.join(paths.root_path, \"logs\"))\n", " start = time.time()\n", " model_variable_scope = paths.var_scope\n", "\n", " log.log_parameters(params, y_train.shape[0], y_valid.shape[0], y_test.shape[0]) \n", " \n", " # Строим граф\n", " graph = tf.Graph()\n", " with graph.as_default():\n", " # Инициализируем входные данные\n", " tf_x_batch = tf.placeholder(tf.float32, shape = (None, params.image_size[0], params.image_size[1], 1))\n", " tf_y_batch = tf.placeholder(tf.float32, shape = (None, params.num_classes))\n", " is_training = tf.placeholder(tf.bool)\n", " current_epoch = tf.Variable(0, trainable=False)\n", "\n", " if params.learning_rate_decay:\n", " learning_rate = tf.train.exponential_decay(params.learning_rate, current_epoch, decay_steps = params.max_epochs, decay_rate = 0.01)\n", " else:\n", " learning_rate = params.learning_rate\n", " \n", " with tf.variable_scope(model_variable_scope):\n", " logits = model_pass(tf_x_batch, params, is_training)\n", " if params.l2_reg_enabled:\n", " with tf.variable_scope('fc4', reuse = True):\n", " l2_loss = tf.nn.l2_loss(tf.get_variable('weights'))\n", " else:\n", " l2_loss = 0\n", "\n", " predictions = tf.nn.softmax(logits)\n", " softmax_cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, tf_y_batch)\n", " loss = tf.reduce_mean(softmax_cross_entropy) + params.l2_lambda * l2_loss \n", "\n", " optimizer = tf.train.AdamOptimizer(\n", " learning_rate = learning_rate\n", " ).minimize(loss)\n", "\n", " with tf.Session(graph = graph) as session:\n", " session.run(tf.global_variables_initializer())\n", "\n", " # Данная функция позволит оценивать текущие параметы модели, такие как accuracy и loss\n", " def get_accuracy_and_loss_in_batches(X, y):\n", " p = []\n", " sce = []\n", " batch_iterator = BatchIterator(batch_size = 128)\n", " for x_batch, y_batch in batch_iterator(X, y):\n", " [p_batch, sce_batch] = session.run([predictions, softmax_cross_entropy], feed_dict = {\n", " tf_x_batch : x_batch, \n", " tf_y_batch : y_batch,\n", " is_training : False\n", " }\n", " )\n", " p.extend(p_batch)\n", " sce.extend(sce_batch)\n", " p = np.array(p)\n", " sce = np.array(sce)\n", " accuracy = 100.0 * np.sum(np.argmax(p, 1) == np.argmax(y, 1)) / p.shape[0]\n", " loss = np.mean(sce)\n", " return (accuracy, loss)\n", " \n", " # В случае, если мы захотим продолжить обучении ранее обученной модели, мы перезапустим сессию\n", " if params.resume_training: \n", " try:\n", " tf.train.Saver().restore(session, paths.model_path)\n", " except Exception as e:\n", " log(\"Неудалось запустить обучение предобученной модели: файл не найден.\")\n", " pass\n", " \n", " saver = tf.train.Saver()\n", " early_stopping = EarlyStopping(tf.train.Saver(), session, patience = params.early_stopping_patience, minimize = True)\n", " train_loss_history = np.empty([0], dtype = np.float32)\n", " train_accuracy_history = np.empty([0], dtype = np.float32)\n", " valid_loss_history = np.empty([0], dtype = np.float32)\n", " valid_accuracy_history = np.empty([0], dtype = np.float32)\n", " if params.max_epochs > 0:\n", " log(\"================= Обучение ==================\")\n", " else:\n", " log(\"=============== Тестирование ================\") \n", " log(\" Timestamp: \" + get_time_hhmmss())\n", " log.sync()\n", " \n", " for epoch in range(params.max_epochs):\n", " current_epoch = epoch\n", " batch_iterator = BatchIterator(batch_size = params.batch_size, shuffle = True)\n", " for x_batch, y_batch in batch_iterator(X_train, y_train):\n", " session.run([optimizer], feed_dict = {\n", " tf_x_batch : x_batch, \n", " tf_y_batch : y_batch,\n", " is_training : True\n", " }\n", " )\n", "\n", " # Когда проходит эпоха, выводим логируем значения loss и accuracy для тестовой и валидационной выборки\n", " if (epoch % params.log_epoch == 0):\n", " valid_accuracy, valid_loss = get_accuracy_and_loss_in_batches(X_valid, y_valid)\n", " train_accuracy, train_loss = get_accuracy_and_loss_in_batches(X_train, y_train)\n", "\n", " if (epoch % params.print_epoch == 0):\n", " log(\"-------------- %4d/%d Эпоха --------------\" % (epoch, params.max_epochs))\n", " log(\" Train loss: %.8f, accuracy: %.2f%%\" % (train_loss, train_accuracy))\n", " log(\"Validation loss: %.8f, accuracy: %.2f%%\" % (valid_loss, valid_accuracy))\n", " log(\" Best loss: %.8f at epoch %d\" % (early_stopping.best_monitored_value, early_stopping.best_monitored_epoch))\n", " log(\" Elapsed time: \" + get_time_hhmmss(start))\n", " log(\" Timestamp: \" + get_time_hhmmss())\n", " log.sync()\n", " else:\n", " valid_loss = 0.\n", " valid_accuracy = 0.\n", " train_loss = 0.\n", " train_accuracy = 0.\n", " \n", " valid_loss_history = np.append(valid_loss_history, [valid_loss])\n", " valid_accuracy_history = np.append(valid_accuracy_history, [valid_accuracy])\n", " train_loss_history = np.append(train_loss_history, [train_loss])\n", " train_accuracy_history = np.append(train_accuracy_history, [train_accuracy])\n", " \n", " if params.early_stopping_enabled:\n", " if valid_loss == 0:\n", " _, valid_loss = get_accuracy_and_loss_in_batches(X_valid, y_valid)\n", " if early_stopping(valid_loss, epoch): \n", " log(\"Early stopping.\\nНаименьшее значение функции потерь: {:.8f} на {} эпохе.\".format(\n", " early_stopping.best_monitored_value, early_stopping.best_monitored_epoch\n", " ))\n", " break\n", "\n", " # И тестируем модель на тестовой выборке\n", " test_accuracy, test_loss = get_accuracy_and_loss_in_batches(X_test, y_test)\n", " valid_accuracy, valid_loss = get_accuracy_and_loss_in_batches(X_valid, y_valid)\n", " log(\"=============================================\")\n", " log(\" Valid loss: %.8f, accuracy = %.2f%%)\" % (valid_loss, valid_accuracy)) \n", " log(\" Test loss: %.8f, accuracy = %.2f%%)\" % (test_loss, test_accuracy)) \n", " log(\" Total time: \" + get_time_hhmmss(start))\n", " log(\" Timestamp: \" + get_time_hhmmss())\n", "\n", " # Сохраняем значения весов для использования в будущем\n", " saved_model_path = saver.save(session, paths.model_path)\n", " log(\"Файл модели: \" + saved_model_path)\n", " np.savez(paths.train_history_path, train_loss_history = train_loss_history, train_accuracy_history = train_accuracy_history, valid_loss_history = valid_loss_history, valid_accuracy_history = valid_accuracy_history)\n", " log(\"Файл истории обучения: \" + paths.train_history_path)\n", " log.sync(notify=True, message=\"Обучение окончено с *%.2f%%* accuracy на тестовой выборке (loss = *%.6f*).\" % (test_accuracy, test_loss))\n", " \n", " plot_learning_curves(params)\n", " log.add_plot(notify=True, caption=\"Learning curves\")\n", " \n", " pyplot.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "\n", "parameters = Parameters(\n", " # Данные об изображениях\n", " num_classes = n_classes,\n", " image_size = (32, 32),\n", " # Параметры обучения\n", " batch_size = 256,\n", " max_epochs = 60,\n", " log_epoch = 1,\n", " print_epoch = 1,\n", " # Оптимизации\n", " learning_rate_decay = True,\n", " learning_rate = 0.0001,\n", " l2_reg_enabled = True,\n", " l2_lambda = 0.0001,\n", " early_stopping_enabled = True,\n", " early_stopping_patience = 50,\n", " resume_training = False,\n", " # Архитектуры слоёв\n", " conv1_k = 5, conv1_d = 32, conv1_p = 0.9,\n", " conv2_k = 5, conv2_d = 64, conv2_p = 0.8,\n", " conv3_k = 5, conv3_d = 128, conv3_p = 0.7,\n", " fc4_size = 1024, fc4_p = 0.5\n", ")\n", "\n", "X_train_preprocessed, X_valid_preprocessed, Y_train_preprocessed, Y_valid_preprocessed = train_test_split(X_train_preprocessed, \n", " Y_train_preprocessed,\n", " test_size = 0.25)\n", "\n", "train_model(parameters, \n", " X_train_preprocessed, Y_train_preprocessed, \n", " X_valid_preprocessed, Y_valid_preprocessed, \n", " X_test_preprocessed,Y_test_preprocessed)" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "Время обучения на моей машине (mb pro, core i5, 8Gb, Intel Iris 1536 МБ) составило почти полтора часа. На полной выборке – 66 часов.\n", "На тестовой выборке доля правильных ответов составила 99,21 % – приличный результат." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Давайте в конце еще посмотрим на изображения, которые сеть не смогла правильно классифицировать." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def get_top_k_predictions(params, X, k = 5):\n", " paths = Paths(params)\n", " \n", " graph = tf.Graph()\n", " with graph.as_default():\n", " tf_x = tf.placeholder(tf.float32, shape = (None, params.image_size[0], params.image_size[1], 1))\n", " is_training = tf.constant(False)\n", " with tf.variable_scope(paths.var_scope):\n", " predictions = tf.nn.softmax(model_pass(tf_x, params, is_training))\n", " top_k_predictions = tf.nn.top_k(predictions, k)\n", "\n", " with tf.Session(graph = graph) as session:\n", " session.run(tf.global_variables_initializer())\n", " tf.train.Saver().restore(session, paths.model_path)\n", " [p] = session.run([top_k_predictions], feed_dict = {\n", " tf_x : X\n", " }\n", " )\n", " return np.array(p)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictions = get_top_k_predictions(parameters, X_test_preprocessed)\n", "\n", "predictions = predictions[1][:, np.argmax(predictions[0], 1)][:, 0].astype(int)\n", "labels = np.argmax(Y_test_preprocessed, 1)\n", "\n", "incorrectly_predicted = X_test[predictions != labels]\n", "print(\"Кол-во неправильно распознанных изображений:\", len(incorrectly_predicted))\n", "print(\"Оригинальные:\")\n", "fig = pyplot.figure(figsize=(20, 20))\n", "fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)\n", "for i in range(incorrectly_predicted.shape[0]):\n", " ax = fig.add_subplot(15, 15, i + 1, xticks=[], yticks=[])\n", " ax.imshow(incorrectly_predicted[i])\n", "pyplot.show()\n", "\n", "print(\"Обработанные:\")\n", "incorrectly_predicted = X_test_preprocessed[predictions != labels]\n", "fig = pyplot.figure(figsize=(20, 20))\n", "fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)\n", "for i in range(incorrectly_predicted.shape[0]):\n", " ax = fig.add_subplot(15, 15, i + 1, xticks=[], yticks=[])\n", " ax.imshow(incorrectly_predicted[i].reshape(32, 32), cmap='gray')\n", "pyplot.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "На этом всё. В этом туториале мы построили сверточную нейронную сеть для распознавания дорожных знаков, которая показала достаточно хорошие результаты классификации на не очень большой выборке данных, а также посмотрели на изображения, которые сеть не смогла распознать.\n", "Спасибо за внимание." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Полезные ссылки" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- [Глава про СНН из книги Deep Learning](http://neuralnetworksanddeeplearning.com/chap6.html#introducing_convolutional_networks)\n", "- [CS231n: Convolutional Neural Networks for Visual Recognition. ](http://cs231n.github.io/convolutional-networks/)\n", "- [Статья Яна Лекуна о распознавании дорожных знаков](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 2 }