{ "cells": [ { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "# SETI CNN using TF and Binary DS" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "import requests\n", "import json\n", "#import ibmseti\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "import tensorflow as tf\n", "import pickle\n", "import time\n", "#!sudo pip install sklearn\n", "import os\n", "from sklearn.metrics import confusion_matrix\n", "from sklearn import metrics" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Set your team folder" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "### SET YOUR TEAM NAME HERE! Use this folder to save intermediate results\n", "team_name = 'Saeed_team'\n", "mydatafolder = os.path.join( os.environ['PWD'], team_name ) # change team_name above to your team's name\n", "if not os.path.exists(mydatafolder):\n", "    os.makedirs(mydatafolder)\n", "print mydatafolder" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "### Import dataset reader\n", "The following cell downloads and imports the Python code used to read the SETI dataset."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "!wget --output-document SETI.zip https://ibm.box.com/shared/static/jhqdhcblhua5dx2t7ixwm88okitjrl6l.zip\n", "!unzip -o SETI.zip\n", "import SETI" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "### Download data" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": true, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "ds_directory = mydatafolder + '/SETI/SETI_ds_64x128/'" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "print os.popen(\"ls -lrt \"+ ds_directory).read() # to verify" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "### Load data SETI" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "#from tensorflow.examples.tutorials.mnist import input_data\n", "#dataset = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)\n", "dataset = SETI.read_data_sets(ds_directory, one_hot=True, validation_size=0)\n", "dataset.train.images.shape" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "## 
Network Parameters" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": true, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "# Parameters\n", "decay_rate=0.96\n", "decay_steps=1000\n", "learning_rate = 0.005\n", "training_epochs = 200\n", "batch_size = 50\n", "display_step = 100\n", "\n", "#check point directory\n", "chk_directory = mydatafolder+'/save/'\n", "checkpoint_path = chk_directory+'model.ckpt'\n", "\n", "\n", "n_classes = 4 # number of possible classifications for the problem\n", "dropout = 0.50 # Dropout, probability to keep units\n", "\n", "height = 64 # height of the image in pixels \n", "width = 128 # width of the image in pixels \n", "n_input = width * height # number of pixels in one image \n" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "### Inputs" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": true, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "x = tf.placeholder(tf.float32, shape=[None, n_input])\n", "y_ = tf.placeholder(tf.float32, shape=[None, n_classes])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "x_image = tf.reshape(x, [-1,height,width,1]) \n", "x_image" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "#### Convolutional Layer 1" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, 
"editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))\n", "b_conv1 = tf.Variable(tf.constant(0.1, shape=[32])) # need 32 biases for 32 outputs\n", "convolve1 = tf.nn.conv2d(x_image, W_conv1, strides=[1, 1, 1, 1], padding='SAME') + b_conv1\n", "h_conv1 = tf.nn.relu(convolve1)\n", "conv1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') #max_pool_2x2\n", "conv1" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "#### Convolutional Layer 2" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "W_conv2 = tf.Variable(tf.truncated_normal([5, 5, 32, 64], stddev=0.1))\n", "b_conv2 = tf.Variable(tf.constant(0.1, shape=[64])) #need 64 biases for 64 outputs\n", "convolve2= tf.nn.conv2d(conv1, W_conv2, strides=[1, 1, 1, 1], padding='SAME')+ b_conv2\n", "h_conv2 = tf.nn.relu(convolve2)\n", "conv2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1], strides=[1, 4, 4, 1], padding='SAME') #2x2 max pool with stride 4\n", "conv2" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "#### Convolutional Layer 3" ] }, { "cell_type": "raw", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "W_conv3 = tf.Variable(tf.truncated_normal([5, 5, 64, 128], stddev=0.1))\n", "b_conv3 = tf.Variable(tf.constant(0.1, shape=[128])) #need 128 biases for 128 outputs\n", "convolve3= tf.nn.conv2d(conv2, W_conv3, strides=[1, 1, 1, 1], padding='SAME')+ b_conv3\n", "h_conv3 = 
tf.nn.relu(convolve3)\n", "conv3 = tf.nn.max_pool(h_conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') #max_pool_2x2\n", "conv3" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "#### Convolutional Layer 4" ] }, { "cell_type": "raw", "metadata": { "button": false, "collapsed": true, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "W_conv4 = tf.Variable(tf.truncated_normal([5, 5, 128, 256], stddev=0.1))\n", "b_conv4 = tf.Variable(tf.constant(0.1, shape=[256])) #need 256 biases for 256 outputs\n", "convolve4= tf.nn.conv2d(conv3, W_conv4, strides=[1, 1, 1, 1], padding='SAME')+ b_conv4\n", "h_conv4 = tf.nn.relu(convolve4)\n", "conv4 = tf.nn.max_pool(h_conv4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') #max_pool_2x2" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "#### Fully Connected Layer 1" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "input_layer = conv2\n", "dim = input_layer.get_shape().as_list()\n", "dim" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "dims= dim[1]*dim[2]*dim[3]\n", "nodes1 = 1024\n", "prv_layer_matrix = tf.reshape(input_layer, [-1, dims])\n", "W_fc1 = tf.Variable(tf.truncated_normal([dims, nodes1], stddev=0.1))\n", "b_fc1 = tf.Variable(tf.constant(0.1, shape=[nodes1])) # need 1024 biases for 1024 outputs\n", "h_fcl1 = tf.matmul(prv_layer_matrix, W_fc1) + 
b_fc1\n", "fc_layer1 = tf.nn.relu(h_fcl1) # ReLU activation\n", "fc_layer1\n" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "#### Dropout 1" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": true, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "keep_prob = tf.placeholder(tf.float32)\n", "layer_drop1 = tf.nn.dropout(fc_layer1, keep_prob)" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "#### Fully Connected Layer 2" ] }, { "cell_type": "raw", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "nodes2 = 256\n", "W_fc2 = tf.Variable(tf.truncated_normal([layer_drop1.get_shape().as_list()[1], nodes2], stddev=0.1))\n", "b_fc2 = tf.Variable(tf.constant(0.1, shape=[nodes2])) \n", "h_fcl2 = tf.matmul(layer_drop1, W_fc2) + b_fc2\n", "fc_layer2 = tf.nn.relu(h_fcl2) # ReLU activation\n", "fc_layer2" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "#### Dropout 2" ] }, { "cell_type": "raw", "metadata": { "button": false, "collapsed": true, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "layer_drop2 = tf.nn.dropout(fc_layer2, keep_prob)" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "#### Readout Layer" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": true, "deletable": true, 
"editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "W_fc = tf.Variable(tf.truncated_normal([nodes1, n_classes], stddev=0.1)) #1024 neurons\n", "b_fc = tf.Variable(tf.constant(0.1, shape=[n_classes])) # 4 possibilities for classes [0,1,2,3]\n", "fc = tf.matmul(layer_drop1, W_fc) + b_fc\n", "y_CNN= tf.nn.softmax(fc)" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "#### Loss function" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": true, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "# pass the unnormalized logits (fc), not the softmax output y_CNN, to softmax_cross_entropy_with_logits\n", "cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=fc, labels=y_))" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "#### Training\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": true, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "# Create a variable to track the global step.\n", "global_step = tf.Variable(0, trainable=False)\n", "\n", "# create learning_decay\n", "lr = tf.train.exponential_decay( learning_rate,\n", "                                 global_step,\n", "                                 decay_steps,\n", "                                 decay_rate, staircase=True )" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": true, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "# Use the optimizer to apply the gradients that minimize the loss\n", "# (and also increment the global step counter) as a single training step.\n", "optimizer = 
tf.train.GradientDescentOptimizer(lr)\n", "\n", "train_op = optimizer.minimize(cross_entropy, global_step=global_step)\n", "#train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cross_entropy)" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "#### Evaluation" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": true, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "correct_prediction = tf.equal(tf.argmax(y_CNN,1), tf.argmax(y_,1))\n", "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "### Create checkpoint directory" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "directory = os.path.dirname(chk_directory)\n", "try:\n", " os.stat(directory)\n", " ckpt = tf.train.get_checkpoint_state(chk_directory)\n", " print ckpt\n", "except:\n", " os.mkdir(directory) " ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "## Training" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "# Initializing the variables\n", "init = tf.global_variables_initializer()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": 
false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "loss_values = []\n", "with tf.Session() as sess:\n", "\n", "    \n", "    X_test = dataset.test.images\n", "    y_test = dataset.test.labels\n", "    sess.run(init)\n", "    saver = tf.train.Saver(tf.global_variables())\n", "    \n", "    # load previously trained model if applicable\n", "    ckpt = tf.train.get_checkpoint_state(chk_directory)\n", "    if ckpt:\n", "        print \"loading model: \",ckpt.model_checkpoint_path\n", "        #saver.restore(sess, ckpt.model_checkpoint_path)\n", "    \n", "    \n", "    #step = 0\n", "    num_examples = dataset.train.num_examples\n", "    # Training cycle\n", "    for epoch in range(training_epochs):\n", "        avg_loss = 0.\n", "        avg_accuracy = 0.\n", "        #dataset.shuffle_data()\n", "        total_batch = int(num_examples / batch_size)\n", "\n", "        # Loop over all batches\n", "        start = time.time()\n", "        for step in range(total_batch):\n", "            x_batch, y_batch = dataset.train.next_batch(batch_size,shuffle=True)\n", "            train_op.run(feed_dict={x: x_batch, y_: y_batch, keep_prob: dropout})\n", "            loss, acc = sess.run([cross_entropy, accuracy], feed_dict={x: x_batch,y_: y_batch,keep_prob: 1.})\n", "            \n", "            avg_loss += loss / total_batch\n", "            avg_accuracy += acc / total_batch\n", "            if step % display_step == 0:\n", "\n", "                \n", "                # Calculate batch loss and accuracy\n", "                loss, acc = sess.run([cross_entropy, accuracy], feed_dict={x: x_batch,y_: y_batch,keep_prob: 1.})\n", "                #train_accuracy = accuracy.eval(feed_dict={x:x_batch, y_: y_batch, keep_prob: 0.5})\n", "\n", "                test_accuracy = sess.run(accuracy, feed_dict={x: X_test[0:100], y_: y_test[0:100], keep_prob: 1.})\n", "\n", "                print(\"Iter \" + str(step) + \\\n", "                    \", Minibatch Loss= \" + \"{:.6f}\".format(loss) + \\\n", "                    \", Training Accuracy= \" + \"{:.5f}\".format(acc) + \\\n", "                    \", Test Accuracy= \" + \"{:.5f}\".format(test_accuracy) )\n", "        \n", "        # log progress every epoch\n", "        if epoch >= 0 and epoch % 1 == 
0:\n", "            # Save model\n", "            #print (\"model saved to {}\".format(checkpoint_path))\n", "            #saver.save(sess, checkpoint_path, global_step = epoch)\n", "            end = time.time()\n", "            plr = sess.run(lr)\n", "            loss_values.append(avg_loss)\n", "            #print(sess.run(tf.train.global_step()))\n", "            print \"Epoch:\", '%04d' % (epoch+1) , \", Epoch time=\" , \"{:.5f}\".format(end - start) , \", lr=\", \"{:.9f}\".format(plr), \", cost=\", \"{:.9f}\".format(avg_loss) ,\", Acc=\", \"{:.9f}\".format(avg_accuracy)\n", "\n", "    print(\"Optimization Finished!\")\n", "    print (\"model saved to {}\".format(checkpoint_path))\n", "    saver.save(sess, checkpoint_path, global_step=(epoch+1)*step)\n", "\n", "    \n", "    \n", "    # Calculate accuracy for test images\n", "    #print(\"Testing Accuracy:\", sess.run(accuracy, feed_dict={x: X_test[0:30], y_: y_test[0:30], keep_prob: 1.}))\n", "    \n", "    # Find the labels of test set\n", "    y_pred_lb = sess.run(tf.argmax(y_CNN,1), feed_dict={x: X_test[0:100], y_: y_test[0:100], keep_prob: 1.})\n", "    y_pred = sess.run(y_CNN, feed_dict={x: X_test[0:100], y_: y_test[0:100], keep_prob: 1.})\n", "    \n", "    # let's save the kernels\n", "    kernels_l1 = sess.run(tf.reshape(tf.transpose(W_conv1, perm=[2, 3, 0, 1]),[32,-1]))\n", "    kernels_l2 = sess.run(tf.reshape(tf.transpose(W_conv2, perm=[2, 3, 0, 1]),[32*64,-1]))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "%matplotlib inline\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "plt.plot([np.mean(loss_values[i:i+5]) for i in range(len(loss_values))])\n", "plt.show()\n" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "## Evaluation" ] }, { "cell_type": "markdown", "metadata": { "deletable": true, "editable": true }, 
"source": [ "Accuracy depends on the number of epochs you set in the parameters section." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "y_true_lb = np.argmax(y_test[0:100],1) # ground-truth labels (renamed to avoid shadowing the y_ placeholder)\n", "print metrics.classification_report(y_true= y_true_lb, y_pred= y_pred_lb)\n", "print metrics.confusion_matrix(y_true= y_true_lb, y_pred= y_pred_lb)\n", "print(\"Classification accuracy: %0.6f\" % metrics.accuracy_score(y_true= y_true_lb, y_pred= y_pred_lb) )\n", "print(\"Log Loss: %0.6f\" % metrics.log_loss(y_true= y_true_lb, y_pred= y_pred, labels=range(4)) )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Generate CSV file for Scoreboard\n", "\n", "Here's an example of what the CSV file should look like for submission to the scoreboard, although in this case we only have 4 classes instead of 7.\n", "\n", "#### NOTE: This uses the test set created in Step_5c, which only contains the BASIC4 test data set. The code challenge and hackathon will be based on the Primary Data Set, which contains 7 signal classes and a different test set."
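 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a minimal sketch of the row format (assuming, as in the next cell, that each CSV row holds the per-class softmax probabilities for one test sample; the numbers here are made up, not real model output):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n", "# hypothetical probabilities for one sample over the 4 classes (they sum to 1)\n", "row = np.array([[0.70, 0.10, 0.15, 0.05]])\n", "np.savetxt('example_row.csv', row, delimiter=',')\n", "print(open('example_row.csv').read())" 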
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "my_output_results = mydatafolder + '/' + 'DL_scores.csv'\n", "np.savetxt(my_output_results, y_pred, delimiter=\",\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print os.popen(\"ls -lrt \"+ mydatafolder).read() # to verify" ] }, { "cell_type": "markdown", "metadata": { "button": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "### Visualization" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "!wget --output-document utils1.py http://deeplearning.net/tutorial/code/utils.py\n", "import utils1\n", "from utils1 import tile_raster_images" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "#from utils import tile_raster_images\n", "import matplotlib.pyplot as plt\n", "from PIL import Image\n", "%matplotlib inline\n", "image = Image.fromarray(tile_raster_images(kernels_l1, img_shape=(5, 5) ,tile_shape=(4, 8), tile_spacing=(1, 1)))\n", "### Plot image\n", "plt.rcParams['figure.figsize'] = (18.0, 18.0)\n", "imgplot = plt.imshow(image)\n", "imgplot.set_cmap('gray') " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "image = Image.fromarray(tile_raster_images(kernels_l2, img_shape=(5, 5) ,tile_shape=(4, 12), tile_spacing=(1, 1)))\n", "### Plot 
image\n", "plt.rcParams['figure.figsize'] = (18.0, 18.0)\n", "imgplot = plt.imshow(image)\n", "imgplot.set_cmap('gray') " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "import numpy as np\n", "plt.rcParams['figure.figsize'] = (5.0, 5.0)\n", "sampleimage1 = X_test[3]\n", "plt.imshow(np.reshape(sampleimage1,[64,128]), cmap=\"gray\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": false, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [ "# Launch the graph\n", "with tf.Session() as sess:\n", "    sess.run(init)\n", "    saver = tf.train.Saver(tf.global_variables())\n", "    \n", "    # load previously trained model if applicable\n", "    ckpt = tf.train.get_checkpoint_state(chk_directory)\n", "    if ckpt:\n", "        print \"loading model: \",ckpt.model_checkpoint_path\n", "        saver.restore(sess, ckpt.model_checkpoint_path)\n", "    ActivatedUnits1 = sess.run(convolve1,feed_dict={x:np.reshape(sampleimage1,[1,64*128],order='F'),keep_prob:1.0})\n", "    plt.figure(1, figsize=(20,20))\n", "    n_columns = 3\n", "    n_rows = 3\n", "    for i in range(9):\n", "        plt.subplot(n_rows, n_columns, i+1)\n", "        plt.title('Filter ' + str(i))\n", "        plt.imshow(ActivatedUnits1[0,:,:,i], interpolation=\"nearest\", cmap=\"gray\")" ] }, { "cell_type": "markdown", "metadata": { "button": false, "collapsed": true, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "source": [ "\n", "

### Authors\n", "\n", "**Saeed Aghabozorgi**\n", "\n", "Saeed Aghabozorgi, PhD, is a Senior Data Scientist at IBM with a track record of developing enterprise-level applications that substantially increase clients’ ability to turn data into actionable knowledge. He is a researcher in the data mining field and an expert in developing advanced analytic methods such as machine learning and statistical modelling on large datasets.
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "button": false, "collapsed": true, "deletable": true, "editable": true, "new_sheet": false, "run_control": { "read_only": false } }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.12" } }, "nbformat": 4, "nbformat_minor": 1 }