{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Face Recognition\n", "\n", "In this assignment, you will build a face recognition system. Many of the ideas presented here are from [FaceNet](https://arxiv.org/pdf/1503.03832.pdf). In lecture, we also talked about [DeepFace](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf). \n", "\n", "Face recognition problems commonly fall into two categories: \n", "\n", "- **Face Verification** - \"is this the claimed person?\". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem. \n", "- **Face Recognition** - \"who is this person?\". For example, the video lecture showed a [face recognition video](https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem. \n", "\n", "FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.\n", " \n", "**In this assignment, you will:**\n", "- Implement the triplet loss function\n", "- Use a pretrained model to map face images into 128-dimensional encodings\n", "- Use these encodings to perform face verification and face recognition\n", "\n", "#### Channels-first notation\n", "\n", "* In this exercise, we will be using a pre-trained model which represents ConvNet activations using a **\"channels first\"** convention, as opposed to the \"channels last\" convention used in lecture and previous programming assignments. \n", "* In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. \n", "* Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Updates\n", "\n", "#### If you were working on the notebook before this update...\n", "* The current notebook is version \"3a\".\n", "* You can find your original work saved in the notebook with the previous version name (\"v3\") \n", "* To view the file directory, go to the menu \"File->Open\", and this will open a new tab that shows the file directory.\n", "\n", "#### List of updates\n", "* `triplet_loss`: Additional Hints added.\n", "* `verify`: Hints added.\n", "* `who_is_it`: corrected hints given in the comments.\n", "* Spelling and formatting updates for easier reading.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Load packages\n", "Let's load the required packages. " ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The autoreload extension is already loaded. 
To reload it, use:\n", " %reload_ext autoreload\n" ] } ], "source": [ "from keras.models import Sequential\n", "from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate\n", "from keras.models import Model\n", "from keras.layers.normalization import BatchNormalization\n", "from keras.layers.pooling import MaxPooling2D, AveragePooling2D\n", "from keras.layers.merge import Concatenate\n", "from keras.layers.core import Lambda, Flatten, Dense\n", "from keras.initializers import glorot_uniform\n", "from keras.engine.topology import Layer\n", "from keras import backend as K\n", "K.set_image_data_format('channels_first')\n", "import cv2\n", "import os\n", "import numpy as np\n", "from numpy import genfromtxt\n", "import pandas as pd\n", "import tensorflow as tf\n", "from fr_utils import *\n", "from inception_blocks_v2 import *\n", "\n", "%matplotlib inline\n", "%load_ext autoreload\n", "%autoreload 2\n", "\n", "np.set_printoptions(threshold=np.nan)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 0 - Naive Face Verification\n", "\n", "In Face Verification, you're given two images and you have to determine if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images are less than a chosen threshold, it may be the same person! \n", "\n", "\n", "
**Figure 1**
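As a concrete illustration of this naive baseline, here is a minimal sketch that compares two images pixel by pixel; the file paths and the threshold are placeholders chosen for illustration, not values used elsewhere in this assignment.

```python
import cv2
import numpy as np

def naive_verify(path_a, path_b, threshold=30.0):
    """Naive verification: L2 distance between the raw pixels of two same-sized images."""
    img_a = cv2.imread(path_a).astype(np.float32) / 255.0  # normalize pixel values to [0, 1]
    img_b = cv2.imread(path_b).astype(np.float32) / 255.0
    dist = np.linalg.norm(img_a - img_b)                   # distance over all raw pixels
    return dist, dist < threshold                          # "same person" if below the (arbitrary) threshold

# Placeholder usage; any two face crops with identical dimensions would work here
# dist, same_person = naive_verify("images/face_1.jpg", "images/face_2.jpg")
```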
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on. \n", "* You'll see that rather than using the raw image, you can learn an encoding, $f(img)$. \n", "* By using an encoding for each image, an element-wise comparison produces a more accurate judgement as to whether two pictures are of the same person." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1 - Encoding face images into a 128-dimensional vector \n", "\n", "### 1.1 - Using a ConvNet to compute encodings\n", "\n", "The FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning, let's load weights that someone else has already trained. The network architecture follows the Inception model from [Szegedy *et al.*](https://arxiv.org/abs/1409.4842). We have provided an inception network implementation. You can look in the file `inception_blocks_v2.py` to see how it is implemented (do so by going to \"File->Open...\" at the top of the Jupyter notebook. This opens the file directory that contains the '.py' file). " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The key things you need to know are:\n", "\n", "- This network uses 96x96 dimensional RGB images as its input. Specifically, inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$ \n", "- It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector\n", "\n", "Run the cell below to create the model for face images." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": true }, "outputs": [], "source": [ "FRmodel = faceRecoModel(input_shape=(3, 96, 96))" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Total Params: 3743280\n" ] } ], "source": [ "print(\"Total Params:\", FRmodel.count_params())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "** Expected Output **\n", "\n", "
\n", "Total Params: 3743280\n", "
\n", "
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows:\n", "\n", "\n", "
**Figure 2**: By computing the distance between two encodings and thresholding, you can determine if the two pictures represent the same person
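Sketched in code, the comparison in Figure 2 comes down to a few lines; the two image paths below are placeholders, and `img_to_encoding` is the helper from `fr_utils.py` used throughout this notebook.

```python
import numpy as np

enc_a = img_to_encoding("images/face_a.jpg", FRmodel)   # placeholder path for the first picture
enc_b = img_to_encoding("images/face_b.jpg", FRmodel)   # placeholder path for the second picture

dist = np.linalg.norm(enc_a - enc_b)   # distance between the two 128-dimensional encodings
same_person = dist < 0.7               # 0.7 is the threshold this notebook uses in Section 3
```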
\n", "\n", "So, an encoding is a good one if: \n", "- The encodings of two images of the same person are quite similar to each other. \n", "- The encodings of two images of different persons are very different.\n", "\n", "The triplet loss function formalizes this, and tries to \"push\" the encodings of two images of the same person (Anchor and Positive) closer together, while \"pulling\" the encodings of two images of different persons (Anchor, Negative) further apart. \n", "\n", "\n", "
\n", "
**Figure 3**:
In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N)
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "### 1.2 - The Triplet Loss\n", "\n", "For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.\n", "\n", "\n", "\n", "\n", "\n", "Training will use triplets of images $(A, P, N)$: \n", "\n", "- A is an \"Anchor\" image--a picture of a person. \n", "- P is a \"Positive\" image--a picture of the same person as the Anchor image.\n", "- N is a \"Negative\" image--a picture of a different person than the Anchor image.\n", "\n", "These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example. \n", "\n", "You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$) by at least a margin $\\alpha$:\n", "\n", "$$\\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2 + \\alpha < \\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2$$\n", "\n", "You would thus like to minimize the following \"triplet cost\":\n", "\n", "$$\\mathcal{J} = \\sum^{m}_{i=1} \\large[ \\small \\underbrace{\\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2}_\\text{(1)} - \\underbrace{\\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2}_\\text{(2)} + \\alpha \\large ] \\small_+ \\tag{3}$$\n", "\n", "Here, we are using the notation \"$[z]_+$\" to denote $max(z,0)$. \n", "\n", "Notes:\n", "- The term (1) is the squared distance between the anchor \"A\" and the positive \"P\" for a given triplet; you want this to be small. \n", "- The term (2) is the squared distance between the anchor \"A\" and the negative \"N\" for a given triplet, you want this to be relatively large. It has a minus sign preceding it because minimizing the negative of the term is the same as maximizing that term.\n", "- $\\alpha$ is called the margin. It is a hyperparameter that you pick manually. We will use $\\alpha = 0.2$. \n", "\n", "Most implementations also rescale the encoding vectors to haven L2 norm equal to one (i.e., $\\mid \\mid f(img)\\mid \\mid_2$=1); you won't have to worry about that in this assignment.\n", "\n", "**Exercise**: Implement the triplet loss as defined by formula (3). Here are the 4 steps:\n", "1. Compute the distance between the encodings of \"anchor\" and \"positive\": $\\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2$\n", "2. Compute the distance between the encodings of \"anchor\" and \"negative\": $\\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2$\n", "3. Compute the formula per training example: $ \\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2 - \\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2 + \\alpha$\n", "3. Compute the full formula by taking the max with zero and summing over the training examples:\n", "$$\\mathcal{J} = \\sum^{m}_{i=1} \\large[ \\small \\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2 - \\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2+ \\alpha \\large ] \\small_+ \\tag{3}$$\n", "\n", "#### Hints\n", "* Useful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.maximum()`.\n", "* For steps 1 and 2, you will sum over the entries of $\\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2$ and $\\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2$. 
\n", "* For step 4 you will sum over the training examples.\n", "\n", "#### Additional Hints\n", "* Recall that the square of the L2 norm is the sum of the squared differences: $||x - y||_{2}^{2} = \\sum_{i=1}^{N}(x_{i} - y_{i})^{2}$\n", "* Note that the `anchor`, `positive` and `negative` encodings are of shape `(m,128)`, where m is the number of training examples and 128 is the number of elements used to encode a single example.\n", "* For steps 1 and 2, you will maintain the number of `m` training examples and sum along the 128 values of each encoding. \n", "[tf.reduce_sum](https://www.tensorflow.org/api_docs/python/tf/math/reduce_sum) has an `axis` parameter. This chooses along which axis the sums are applied. \n", "* Note that one way to choose the last axis in a tensor is to use negative indexing (`axis=-1`).\n", "* In step 4, when summing over training examples, the result will be a single scalar value.\n", "* For `tf.reduce_sum` to sum across all axes, keep the default value `axis=None`." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# GRADED FUNCTION: triplet_loss\n", "\n", "def triplet_loss(y_true, y_pred, alpha = 0.2):\n", " \"\"\"\n", " Implementation of the triplet loss as defined by formula (3)\n", " \n", " Arguments:\n", " y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.\n", " y_pred -- python list containing three objects:\n", " anchor -- the encodings for the anchor images, of shape (None, 128)\n", " positive -- the encodings for the positive images, of shape (None, 128)\n", " negative -- the encodings for the negative images, of shape (None, 128)\n", " \n", " Returns:\n", " loss -- real number, value of the loss\n", " \"\"\"\n", " \n", " anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]\n", " \n", " ### START CODE HERE ### (≈ 4 lines)\n", " # Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1\n", " pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis = -1)\n", " # Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1\n", " neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis = -1)\n", " # Step 3: subtract the two previous distances and add alpha.\n", " basic_loss = pos_dist- neg_dist + alpha\n", " # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.\n", " loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))\n", " ### END CODE HERE ###\n", " \n", " return loss" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "loss = 528.143\n" ] } ], "source": [ "with tf.Session() as test:\n", " tf.set_random_seed(1)\n", " y_true = (None, None, None)\n", " y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),\n", " tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),\n", " tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))\n", " loss = triplet_loss(y_true, y_pred)\n", " \n", " print(\"loss = \" + str(loss.eval()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "
\n", " **loss**\n", " \n", " 528.143\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2 - Loading the pre-trained model\n", "\n", "FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run. " ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": true }, "outputs": [], "source": [ "FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])\n", "load_weights_from_FaceNet(FRmodel)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are some examples of distances between the encodings between three individuals:\n", "\n", "\n", "
\n", "
**Figure 4**:
Example of distance outputs between three individuals' encodings
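You can reproduce the kind of distance table shown in Figure 4 with a short loop; the file names below are the same ones used to build the database in Section 3.1, so treat this as an illustrative sketch rather than a graded step.

```python
import numpy as np

names = ["younes", "kian", "andrew"]
encodings = {name: img_to_encoding("images/" + name + ".jpg", FRmodel) for name in names}

# Print all pairwise distances between the three encodings
for a in names:
    for b in names:
        print("%-8s vs %-8s : %.3f" % (a, b, np.linalg.norm(encodings[a] - encodings[b])))
```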
\n", "\n", "Let's now use this model to perform face verification and face recognition! " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3 - Applying the model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You are building a system for an office building where the building manager would like to offer facial recognition to allow the employees to enter the building.\n", "\n", "You'd like to build a **Face verification** system that gives access to the list of people who live or work there. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the entrance. The face recognition system then checks that they are who they claim to be." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.1 - Face Verification\n", "\n", "Let's build a database containing one encoding vector for each person who is allowed to enter the office. To generate the encoding we use `img_to_encoding(image_path, model)`, which runs the forward propagation of the model on the specified image. \n", "\n", "Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face." ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": true }, "outputs": [], "source": [ "database = {}\n", "database[\"danielle\"] = img_to_encoding(\"images/danielle.png\", FRmodel)\n", "database[\"younes\"] = img_to_encoding(\"images/younes.jpg\", FRmodel)\n", "database[\"tian\"] = img_to_encoding(\"images/tian.jpg\", FRmodel)\n", "database[\"andrew\"] = img_to_encoding(\"images/andrew.jpg\", FRmodel)\n", "database[\"kian\"] = img_to_encoding(\"images/kian.jpg\", FRmodel)\n", "database[\"dan\"] = img_to_encoding(\"images/dan.jpg\", FRmodel)\n", "database[\"sebastiano\"] = img_to_encoding(\"images/sebastiano.jpg\", FRmodel)\n", "database[\"bertrand\"] = img_to_encoding(\"images/bertrand.jpg\", FRmodel)\n", "database[\"kevin\"] = img_to_encoding(\"images/kevin.jpg\", FRmodel)\n", "database[\"felix\"] = img_to_encoding(\"images/felix.jpg\", FRmodel)\n", "database[\"benoit\"] = img_to_encoding(\"images/benoit.jpg\", FRmodel)\n", "database[\"arnaud\"] = img_to_encoding(\"images/arnaud.jpg\", FRmodel)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.\n", "\n", "**Exercise**: Implement the verify() function which checks if the front-door camera picture (`image_path`) is actually the person called \"identity\". You will have to go through the following steps:\n", "1. Compute the encoding of the image from `image_path`.\n", "2. Compute the distance between this encoding and the encoding of the identity image stored in the database.\n", "3. Open the door if the distance is less than 0.7, else do not open it.\n", "\n", "\n", "* As presented above, you should use the L2 distance [np.linalg.norm](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html). \n", "* (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.) \n", "\n", "#### Hints\n", "* `identity` is a string that is also a key in the `database` dictionary.\n", "* `img_to_encoding` has two parameters: the `image_path` and `model`." 
] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# GRADED FUNCTION: verify\n", "\n", "def verify(image_path, identity, database, model):\n", " \"\"\"\n", " Function that verifies if the person on the \"image_path\" image is \"identity\".\n", " \n", " Arguments:\n", " image_path -- path to an image\n", " identity -- string, name of the person you'd like to verify the identity. Has to be an employee who works in the office.\n", " database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).\n", " model -- your Inception model instance in Keras\n", " \n", " Returns:\n", " dist -- distance between the image_path and the image of \"identity\" in the database.\n", " door_open -- True, if the door should open. False otherwise.\n", " \"\"\"\n", " \n", " ### START CODE HERE ###\n", " # Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)\n", " encoding = img_to_encoding(image_path, model)\n", " \n", " # Step 2: Compute distance with identity's image (≈ 1 line)\n", " dist = np.linalg.norm(encoding - database[identity])\n", " \n", " # Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)\n", " if dist < 0.7:\n", " print(\"It's \" + str(identity) + \", welcome in!\")\n", " door_open = True\n", " else:\n", " print(\"It's not \" + str(identity) + \", please go away\")\n", " door_open = False\n", " \n", " ### END CODE HERE ###\n", " \n", " return dist, door_open" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Younes is trying to enter the office and the camera takes a picture of him (\"images/camera_0.jpg\"). Let's run your verification algorithm on this picture:\n", "\n", "" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "It's younes, welcome in!\n" ] }, { "data": { "text/plain": [ "(0.65939289, True)" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "verify(\"images/camera_0.jpg\", \"younes\", database, FRmodel)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "
\n", " **It's younes, welcome in!**\n", " \n", " (0.65939283, True)\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Benoit, who does not work in the office, stole Kian's ID card and tried to enter the office. The camera took a picture of Benoit (\"images/camera_2.jpg). Let's run the verification algorithm to check if benoit can enter.\n", "" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "It's not kian, please go away\n" ] }, { "data": { "text/plain": [ "(0.86224014, False)" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "verify(\"images/camera_2.jpg\", \"kian\", database, FRmodel)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "
\n", " **It's not kian, please go away**\n", " \n", " (0.86224014, False)\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3.2 - Face Recognition\n", "\n", "Your face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the office the next day and couldn't get in! \n", "\n", "To solve this, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the building, and the door will unlock for them! \n", "\n", "You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as one of the inputs. \n", "\n", "**Exercise**: Implement `who_is_it()`. You will have to go through the following steps:\n", "1. Compute the target encoding of the image from image_path\n", "2. Find the encoding from the database that has smallest distance with the target encoding. \n", " - Initialize the `min_dist` variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding.\n", " - Loop over the database dictionary's names and encodings. To loop use `for (name, db_enc) in database.items()`.\n", " - Compute the L2 distance between the target \"encoding\" and the current \"encoding\" from the database.\n", " - If this distance is less than the min_dist, then set `min_dist` to `dist`, and `identity` to `name`." ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# GRADED FUNCTION: who_is_it\n", "\n", "def who_is_it(image_path, database, model):\n", " \"\"\"\n", " Implements face recognition for the office by finding who is the person on the image_path image.\n", " \n", " Arguments:\n", " image_path -- path to an image\n", " database -- database containing image encodings along with the name of the person on the image\n", " model -- your Inception model instance in Keras\n", " \n", " Returns:\n", " min_dist -- the minimum distance between image_path encoding and the encodings from the database\n", " identity -- string, the name prediction for the person on image_path\n", " \"\"\"\n", " \n", " ### START CODE HERE ### \n", " \n", " ## Step 1: Compute the target \"encoding\" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)\n", " encoding = img_to_encoding(image_path, model)\n", " \n", " ## Step 2: Find the closest encoding ##\n", " \n", " # Initialize \"min_dist\" to a large value, say 100 (≈1 line)\n", " min_dist = 100\n", " \n", " # Loop over the database dictionary's names and encodings.\n", " for (name, db_enc) in database.items():\n", " \n", " # Compute L2 distance between the target \"encoding\" and the current \"emb\" from the database. (≈ 1 line)\n", " dist = np.linalg.norm(encoding-db_enc)\n", "\n", " # If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)\n", " if dist < min_dist:\n", " min_dist = dist\n", " identity = name\n", "\n", " ### END CODE HERE ###\n", " \n", " if min_dist > 0.7:\n", " print(\"Not in the database.\")\n", " else:\n", " print (\"it's \" + str(identity) + \", the distance is \" + str(min_dist))\n", " \n", " return min_dist, identity" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Younes is at the front-door and the camera takes a picture of him (\"images/camera_0.jpg\"). 
Let's see if your who_is_it() algorithm identifies Younes. " ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "it's younes, the distance is 0.659393\n" ] }, { "data": { "text/plain": [ "(0.65939289, 'younes')" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "who_is_it(\"images/camera_0.jpg\", database, FRmodel)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "
\n", " **it's younes, the distance is 0.659393**\n", " \n", " (0.65939283, 'younes')\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can change \"`camera_0.jpg`\" (picture of younes) to \"`camera_1.jpg`\" (picture of bertrand) and see the result." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Congratulations!\n", "\n", "* Your face recognition system is working well! It only lets in authorized persons, and people don't need to carry an ID card around anymore! \n", "* You've now seen how a state-of-the-art face recognition system works.\n", "\n", "#### Ways to improve your facial recognition model\n", "Although we won't implement it here, here are some ways to further improve the algorithm:\n", "- Put more images of each person (under different lighting conditions, taken on different days, etc.) into the database. Then given a new image, compare the new face to multiple pictures of the person. This would increase accuracy.\n", "- Crop the images to just contain the face, and less of the \"border\" region around the face. This preprocessing removes some of the irrelevant pixels around the face, and also makes the algorithm more robust.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Key points to remember\n", "- Face verification solves an easier 1:1 matching problem; face recognition addresses a harder 1:K matching problem. \n", "- The triplet loss is an effective loss function for training a neural network to learn an encoding of a face image.\n", "- The same encoding can be used for verification and recognition. Measuring distances between two images' encodings allows you to determine whether they are pictures of the same person. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Congrats on finishing this assignment! \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### References:\n", "\n", "- Florian Schroff, Dmitry Kalenichenko, James Philbin (2015). [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/pdf/1503.03832.pdf)\n", "- Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf (2014). [DeepFace: Closing the gap to human-level performance in face verification](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf) \n", "- The pretrained model we use is inspired by Victor Sy Wang's implementation and was loaded using his code: https://github.com/iwantooxxoox/Keras-OpenFace.\n", "- Our implementation also took a lot of inspiration from the official FaceNet github repository: https://github.com/davidsandberg/facenet \n" ] } ], "metadata": { "coursera": { "course_slug": "convolutional-neural-networks", "graded_item_id": "IaknP", "launcher_item_id": "5UMr4" }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.0" } }, "nbformat": 4, "nbformat_minor": 2 }