{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Part 4 of 4 - Real-Time Facial Expression Recognition\n", "\n", "#### | Final Capstone Project for the Diploma Program in Data Science | BrainStation Vancouver |\n", "\n", "| Arash Tavassoli | May-June 2019 |\n", "\n", "---\n", "This is the fourth notebook in a series of four:\n", "* **Part 1 - Exploratory Data Analysis**\n", "\n", "* **Part 2 - Data Preprocessing**\n", "\n", "* **Part 3 - Model Training and Analysis**\n", "\n", "* **Part 4 - Real-Time Facial Expression Recognition**\n", "\n", "### What to expect in this notebook:\n", "\n", "A Real-time Facial Expression Recognizer that will use the trained models from Part 3 to do real-time expression classification from the video captured by the computer's webcam\n", "\n", "---\n", "## Building a Real-Time Facial Expression Recognizer with OpenCV\n", "\n", "In this part we use the trained model from Part 3 and pair its prediction capabilities with OpenCV to build a simple, but fun, tool to take the live video from computer's webcam and do real-time prediction on the detected face:\n", "\n", "\n", "\n", "Let's start with building a function that:\n", "1. Inputs the trained model and list of expected classes \n", "2. Starts capturing the video from computer's webcam\n", "3. Detects the largest face on each frame (with min size limit of 1/20 of the frame's smaller dimension)\n", "4. Predicts the facial expression using the trained model\n", "5. Annotate the frame with a box around the face and the predicted expression (and probabilities for each class)\n", "6. Releases the frame to the screen" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Using TensorFlow backend.\n" ] } ], "source": [ "import cv2\n", "import numpy as np\n", "from keras import models" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "root_dir = '/Users/Arash/Google Drive/Colab Notebooks/BrainStation Capstone - Colab'" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# A function to get the trained model and start a real-time \n", "# facial expression recognizer using computer's webcam (OpenCV):\n", "\n", "def RealTimeExpressionRecognizer(model, classes):\n", " # Load the CascadeClassifier:\n", " face_cascade = cv2.CascadeClassifier('Data/haarcascade_frontalface_alt.xml')\n", "\n", " video = cv2.VideoCapture(0)\n", " \n", " # Setting color and font params:\n", " cv_color = (33, 83, 244)\n", " cv_font = cv2.FONT_HERSHEY_SIMPLEX\n", "\n", " while True:\n", " _, frame = video.read()\n", "\n", " # Converting the frame to grayscale:\n", " im = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)\n", "\n", " # Detecting faces in frame (with min size limit of 1/20 of the frame's smaller dimension):\n", " min_acceptable = int(min(frame.shape)/20)\n", " detected_faces = face_cascade.detectMultiScale(im, minSize = (min_acceptable, min_acceptable))\n", "\n", " # Checking if there is no face: \n", " if detected_faces is ():\n", " \n", " # Text box to instruct the user of how to quit the live stream:\n", " cv2.putText(img = frame, \n", " text = 'Press Q to quit', \n", " org = (10, 20), \n", " fontFace = cv_font, fontScale = 0.7, thickness = 1,\n", " color = (255,255,255))\n", " \n", " # Text box for when there is no face detected in the frame:\n", " no_face_text = \"I'm waiting! Show me your smile! 
:)\"\n", " textsize = cv2.getTextSize(no_face_text, cv_font, 1, 2)[0]\n", " textX = int((frame.shape[1] - textsize[0]) / 2)\n", " textY = int((frame.shape[0] + textsize[1]) / 2)\n", " cv2.putText(img = frame, \n", " text = no_face_text, \n", " org = (textX, textY), \n", " fontFace = cv_font, fontScale = 1, thickness = 2,\n", " color = (255,255,255))\n", " else:\n", " \n", " # Picking the largest face, cropping, resizing and scaling it:\n", " x, y, width, height = detected_faces[np.argmax(detected_faces[:,2])]\n", " im = im[y:y + height, x:x + width]\n", " im = cv2.resize(im, (100, 100))\n", " im = im/255\n", "\n", " # preparing for model prediction, feeding to model:\n", " img_array = np.array(im)\n", " img_array = img_array.reshape(1,100,100,1)\n", " prediction = model.predict(img_array)[0]\n", "\n", " # Drawing the rectangle and inserting texts:\n", " cv2.rectangle(frame, (x, y), (x + width, y + height), cv_color, 3)\n", " \n", " # Text box to instruct the user of how to quit the live stream:\n", " cv2.putText(img = frame, \n", " text = 'Press Q to quit', \n", " org = (10, 20), \n", " fontFace = cv_font, fontScale = 0.5, thickness = 1,\n", " color = (255,255,255))\n", " \n", " # Adding prediction probabilities to the frame:\n", " cv2.putText(img = frame, \n", " text = f'{classes[0]}: {int(prediction[0]*100)}%', \n", " org = (x + width + 10, y + 10), \n", " fontFace = cv_font, fontScale = 0.5, thickness = 1, \n", " color = cv_color)\n", " cv2.putText(img = frame, \n", " text = f'{classes[1]}: {int(prediction[1]*100)}%', \n", " org = (x + width + 10, y + 30), \n", " fontFace = cv_font, fontScale = 0.5, thickness = 1, \n", " color = cv_color)\n", " cv2.putText(img = frame, \n", " text = f'{classes[2]}: {int(prediction[2]*100)}%', \n", " org = (x + width + 10, y + 50), \n", " fontFace = cv_font, fontScale = 0.5, thickness = 1, \n", " color = cv_color)\n", " if len(classes) == 5:\n", " cv2.putText(img = frame, \n", " text = f'{classes[3]}: {int(prediction[3]*100)}%', \n", " org = (x + width + 10, y + 70), \n", " fontFace = cv_font, fontScale = 0.5, thickness = 1, \n", " color = cv_color)\n", " cv2.putText(img = frame, \n", " text = f'{classes[4]}: {int(prediction[4]*100)}%', \n", " org = (x + width + 10, y + 90), \n", " fontFace = cv_font, fontScale = 0.5, thickness = 1, \n", " color = cv_color)\n", " \n", " # Adding the predicted expression:\n", " cv2.putText(img = frame, \n", " text = f'You are {classes[np.argmax(prediction)]}', \n", " org = (x, y - 20), \n", " fontFace = cv_font, fontScale = 1, thickness = 2,\n", " color = cv_color)\n", "\n", " # Showing the frame with annotations:\n", " cv2.imshow(\"Real-Time Facial Expression Recognizer\", frame)\n", " \n", " # Allowing for quit using Q key:\n", " key=cv2.waitKey(1)\n", " if key == ord('q'):\n", " break\n", " video.release()\n", " cv2.destroyAllWindows()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All we need to do now is to load the trained models and feed them into the `RealTimeExpressionRecognizer` function:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1. Model with 3 Classes (Happy, Sad and Surprised):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Loading the trained model:\n", "model = models.load_model(root_dir + '/Models/9/model.h5')\n", "\n", "# List of expected classes:\n", "classes = ['Happy', 'Sad', 'Surprised']\n", "\n", "RealTimeExpressionRecognizer(model, classes)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2. 
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}