{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Face detection using Cognitive Services\n",
"This walkthrough shows you how to use the cognitive services [Face API](https://azure.microsoft.com/services/cognitive-services/face/) to detect faces in an image. The API also returns various attributes such as the gender and age of each person. The sample images used in this walkthrough are from the [How-Old Robot](http://www.how-old.net) that uses the same APIs.\n",
"\n",
"You can run this example as a Jupyter notebook on [MyBinder](https://mybinder.org) by clicking on the launch Binder badge: \n",
"\n",
"[![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/Microsoft/cognitive-services-notebooks/master?filepath=FaceAPI.ipynb)\n",
"\n",
"For more information, see the [REST API Reference](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).\n",
"\n",
"## Prerequisites\n",
"You must have a [Cognitive Services API account](https://docs.microsoft.com/azure/cognitive-services/cognitive-services-apis-create-account) with **Face API**. The [free trial](https://azure.microsoft.com/try/cognitive-services/?api=face-api) is sufficient for this quickstart. You need the subscription key provided when you activate your free trial, or you may use a paid subscription key from your Azure dashboard.\n",
"\n",
"## Running the walkthrough\n",
"To continue with this walkthrough, replace `subscription_key` with a valid subscription key."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"subscription_key = None\n",
"assert subscription_key"
]
},
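{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an alternative to hard-coding the key in the cell above, you can read it from an environment variable. The variable name `FACE_SUBSCRIPTION_KEY` below is an arbitrary example chosen for this sketch, not a name required by the service.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Optional: read the subscription key from an environment variable instead.\n",
"# FACE_SUBSCRIPTION_KEY is an arbitrary name chosen for this example.\n",
"import os\n",
"subscription_key = os.environ.get('FACE_SUBSCRIPTION_KEY', subscription_key)"
]
},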
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, verify `face_api_url` and make sure it corresponds to the region you used when generating the subscription key. If you are using a trial key, you don't need to make any changes."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"face_api_url = 'https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect'"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"Here is the URL of the image. You can experiment with different images by changing ``image_url`` to point to a different image and rerunning this notebook."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": true,
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"image_url = 'https://how-old.net/Images/faces2/main007.jpg'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next few lines of code call into the Face API to detect the faces in the image. In this instance, the image is specified via a publically visible URL. You can also pass an image directly as part of the request body. For more information, see the [API reference](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236). "
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [
{
"data": {
"text/html": [
"Detected 2 faces in the image"
],
"text/plain": [
""
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import requests\n",
"from IPython.display import HTML\n",
"\n",
"headers = { 'Ocp-Apim-Subscription-Key': subscription_key }\n",
" \n",
"params = {\n",
" 'returnFaceId': 'true',\n",
" 'returnFaceLandmarks': 'false',\n",
" 'returnFaceAttributes': 'age,gender,headPose,smile,facialHair,glasses,emotion,hair,makeup,occlusion,accessories,blur,exposure,noise',\n",
"}\n",
"\n",
"response = requests.post(face_api_url, params=params, headers=headers, json={\"url\": image_url})\n",
"faces = response.json()\n",
"HTML(\"Detected %d faces in the image\"%len(faces))"
]
},
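{
"cell_type": "markdown",
"metadata": {},
"source": [
"The call above passes the image by URL. As noted earlier, you can instead send the image bytes in the request body. The sketch below is a minimal illustration, not part of the original walkthrough: it assumes a local file named `face.jpg` (a hypothetical path you should adjust) and uses the `application/octet-stream` content type for binary uploads.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Sketch: detect faces in a local image by sending the bytes directly.\n",
"# 'face.jpg' is a hypothetical local file; adjust the path as needed.\n",
"binary_headers = {\n",
"    'Ocp-Apim-Subscription-Key': subscription_key,\n",
"    'Content-Type': 'application/octet-stream',\n",
"}\n",
"with open('face.jpg', 'rb') as f:\n",
"    image_data = f.read()\n",
"response = requests.post(face_api_url, params=params, headers=binary_headers, data=image_data)\n",
"faces = response.json()"
]
},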
{
"cell_type": "markdown",
"metadata": {
"collapsed": true,
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Finally, the face information can be overlaid of the original image using the `matplotlib` library in Python."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"\n",
"from PIL import Image\n",
"from matplotlib import patches\n",
"from io import BytesIO\n",
"\n",
"response = requests.get(image_url)\n",
"image = Image.open(BytesIO(response.content))\n",
"\n",
"plt.figure(figsize=(8,8))\n",
"ax = plt.imshow(image, alpha=0.6)\n",
"for face in faces:\n",
" fr = face[\"faceRectangle\"]\n",
" fa = face[\"faceAttributes\"]\n",
" origin = (fr[\"left\"], fr[\"top\"])\n",
" p = patches.Rectangle(origin, fr[\"width\"], fr[\"height\"], fill=False, linewidth=2, color='b')\n",
" ax.axes.add_patch(p)\n",
" plt.text(origin[0], origin[1], \"%s, %d\"%(fa[\"gender\"].capitalize(), fa[\"age\"]), fontsize=20, weight=\"bold\", va=\"bottom\")\n",
"_ = plt.axis(\"off\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"Here are more images that can be analyzed using the same technique.\n",
"First, define a helper function, ``annotate_image`` to annotate an image given its URL by calling into the Face API."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": true,
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"def annotate_image(image_url):\n",
" response = requests.post(face_api_url, params=params, headers=headers, json={\"url\": image_url})\n",
" faces = response.json()\n",
"\n",
" image_file = BytesIO(requests.get(image_url).content)\n",
" image = Image.open(image_file)\n",
"\n",
" plt.figure(figsize=(8,8))\n",
" ax = plt.imshow(image, alpha=0.6)\n",
" for face in faces:\n",
" fr = face[\"faceRectangle\"]\n",
" fa = face[\"faceAttributes\"]\n",
" origin = (fr[\"left\"], fr[\"top\"])\n",
" p = patches.Rectangle(origin, fr[\"width\"], \\\n",
" fr[\"height\"], fill=False, linewidth=2, color='b')\n",
" ax.axes.add_patch(p)\n",
" plt.text(origin[0], origin[1], \"%s, %d\"%(fa[\"gender\"].capitalize(), fa[\"age\"]), \\\n",
" fontsize=20, weight=\"bold\", va=\"bottom\")\n",
" plt.axis(\"off\")"
]
},
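{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that when a call fails (for example, with an invalid key or wrong region), the Face API returns a JSON error object instead of a list of faces. The helper below is an optional sketch, not part of the original walkthrough, that you could use to fail fast with a readable message before plotting.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Sketch: raise a readable error when the API returns an error object.\n",
"def check_face_response(response):\n",
"    result = response.json()\n",
"    # Error responses are JSON objects with an 'error' key, not a list of faces.\n",
"    if isinstance(result, dict) and 'error' in result:\n",
"        raise RuntimeError('Face API error: %s' % result['error'])\n",
"    return result"
]
},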
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can then call ``annotate_image`` on other images. A few examples samples are shown below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"annotate_image(\"https://how-old.net/Images/faces2/main001.jpg\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"annotate_image(\"https://how-old.net/Images/faces2/main002.jpg\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"annotate_image(\"https://how-old.net/Images/faces2/main004.jpg\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
},
"ms_docs_meta": {
"author": "v-royhar",
"description": "Get information and code samples to help you quickly get started using the Face API with Python in Cognitive Services.",
"manager": "yutkuo",
"ms.author": "anroth",
"ms.date": "06/21/2017",
"ms.service": "cognitive-services",
"ms.technology": "face",
"ms.topic": "article",
"services": "cognitive-services",
"title": "Face API Python quick start | Microsoft Docs"
}
},
"nbformat": 4,
"nbformat_minor": 2
}